Code Optimization – Part 2

By Pete Goodliffe

C Vu, 34(4):3-4, September 2022

Pete Goodliffe continues to investigate program optimization and how to write efficient code.

Technological progress has merely provided us
with more efficient means for going backwards.
~Aldous Huxley

In the first article of this series we looked at what ‘optimization’ of software is, and why it matters. That’s the theology of optimization. In this month’s article we’ll start to dig into practical techniques for program optimization. Buckle your seatbelts…

The nuts and bolts

So how do you optimize? Rather than learn a list of specific code optimizations, it’s far more important to understand the correct approach to optimizing. Don’t panic; we will see some programming techniques later, but they must be read in the context of this wider optimization process.

The six steps for speeding up a program are:

  1. Determine that it’s too slow, and prove you do need to optimize.
  2. Identify the slowest code. Target this point.
  3. Test the performance of the optimization target.
  4. Optimize the code.
  5. Test that the optimized code still works (very important).
  6. Test the speed increase, and decide what to do next.

This sounds like a lot of work, but without it you’ll actually waste time and effort and end up with crippled code that runs no faster. If you’re not trying to improve execution speed, adjust this process accordingly; for example, tackle memory consumption problems by identifying which data structures are consuming all the memory and target those.

It’s important to begin optimization with a clear goal in sight – the more optimization you perform, the less readable the code becomes. Know the level of performance you require, and stop when it’s sufficiently fast. It’s tempting to keep going, continually trying to squeeze out a little extra performance.

To stand any chance of optimizing correctly, you must take great care to prevent external factors from changing the way your code works. When the world is changing under your feet, you can’t compare measurements realistically. There are two essential techniques that help here:

  • Optimize your code separately from any other work, so the outcome of one task doesn’t cloud the other.
  • Optimize release builds of your program, not development builds. Development builds may run very differently from release builds, due to the inclusion of debugging trace information, object file symbols, and so on.

Now we’ll look at each of these optimization steps in more detail.

Prove you need to optimize

The first thing to do is make sure you really do need to optimize. If the code’s performance is acceptable, then there’s no point in tinkering with it. Knuth said (himself quoting C.A.R. Hoare): “We should forget about small efficiencies, say about 97 percent of the time: Premature optimization is the root of all evil.” There are so many compelling reasons not to optimize that the quickest and safest optimization technique is to prove that you don’t need to do it.

You make this decision based on program requirements or usability studies. With this information you can determine whether optimization takes priority over adding new features and fixing bugs.

Identify the slowest code

This is the part that most programmers get wrong. If you’re going to spend time optimizing, you need to target the places where it will make a difference. Investigations show that the average program spends more than 80 percent of its time in less than 20 percent of the code [1]. This is known as the 80/20 rule (although some go so far as to claim this should be the 90/10 rule). That’s a relatively small target that is very easy to miss, which means you might waste effort optimizing code that’s rarely run.

You might notice that a part of your program has some relatively easy optimizations, but if that part is seldom executed, then there’s no point in optimizing – in this situation, clear code is better than faster code.

How do you figure out where to focus your attention? The most effective technique is to use a profiler. This tool times the flow of control around your program. It shows where that 80 percent of execution time is going, so you know where to concentrate your effort.

A profiler doesn’t tell you which parts of the code are slowest; this is a common misconception. It actually tells you where the CPU spends most of its time. This is subtly different: how quickly a piece of code completes depends on factors like the speed of the CPU clock, the number of other processes being juggled by the OS, and the thread’s priority. You have to interpret these results and use your brain. The program might spend most of its execution time in a few perfectly valid functions which cannot be improved at all. You can’t always optimize; sometimes the laws of physics win.

There are plenty of profiling tools around: many excellent commercial products, and a number of freely available tools. It’s worth spending money on a decent profiler – optimization can easily eat into your time, which is also an expensive commodity. If you don’t have a profiler available, there are a few other timing techniques you can try:

  • Put manual timing tests throughout your code. Make sure you use an accurate clock source and that the time taken to read the clock will not affect program performance too much.
  • Count how often each function is called (some debug libraries provide support for this kind of activity).
  • Exploit compiler-supplied hooks to insert your own accounting code when each function is entered or exited. Many compilers provide a means to do this – some profilers are implemented using such a mechanism.
  • Sample the program counter; interrupt your program periodically in a debugger to see where control is. This is harder in multithreaded programs and is a very slow, manual approach. If you have control over the execution environment, you can write scaffolding to automate this kind of test; effectively writing your own form of profiler.
  • Test an individual function’s impact on the total program execution time by making it slower. If you suspect that a particular function is causing a slowdown, try replacing its call with two calls in succession, and measure how this affects execution time. If the program takes 10 percent longer to run, then the function consumes approximately 10 percent of execution time. (Doubling the call won’t necessarily make the function run twice as slowly – file system buffers or CPU memory caches can enhance the performance of repeated code sections – so treat this as a very rough guide: more qualitative than quantitative.) Use this as a very basic timing test.

When profiling, make sure that you use realistic input data, simulating Real World events. The way your code executes may be drastically affected by the kind of input you feed it or by the way it is driven, so make sure that you provide truly representative input sets. If possible, capture a set of real input data from a live system.

Try profiling several different data sets, to see what difference this makes. Select a very basic set, a heavy use set, and a number of general use sets. This will prevent you from optimizing for the particular quirks of one input data set.

Select profiling test data carefully to represent Real World program use. Otherwise, you might optimize parts of the program that are not normally run.

While a profiler (or equivalent) is a good starting point to choose optimization targets, you can easily miss quite fundamental problems. The profiler only shows how the code in the current design executes – and encourages you to perform code-level improvement only. Look at larger design issues, too. The lack of performance may not be due to a single function, but a more pervasive design flaw. If it is, then you’ll have to work harder to remedy the problem. This shows how important it is to get the initial code design right, with knowledge of established performance requirements.

Don’t rely solely on a profiler to find the causes of program inefficiency – you might miss important problems.

Having completed this step, you’ve found the areas of your code where a performance improvement will have the most benefit. Now it’s time to attack them.

Testing the code

We identified three testing phases in the optimization procedure above. For each piece of code targeted, we test:

  • Its performance before optimization
  • That it still works correctly once optimized
  • Its performance after optimization

Programmers often forget the second check: that the optimized code still works correctly in all possible situations. It’s easy to check the normal mode of operation, but it’s not in our nature to test each and every rare case. This can be the cause of weird bugs late in the day, so be very rigorous about this.

You must measure the code’s performance before and after modification to make sure that you have made a real difference – and to make sure that it is a change for the better; sometimes an ‘optimization’ can be an unwitting pessimization. You can perform these timing tests with your profiler or by inserting timing instrumentation by hand.

Never try to optimize code without performing some kind of before and after measurement.

These are some very important things to think about when running your timing tests:

  • Run both the before and after tests with exactly the same set of input data so that you’re testing exactly the same thing. Otherwise, your tests are meaningless; you’re not comparing apples to apples. An automated test suite is best – with the same kind of live representative data we used in the profiling step.
  • Run all tests under identical prevailing conditions, so that factors like the CPU load or amount of free memory don’t affect your measurements.
  • Ensure that your tests don’t rely on user input. Humans can cause timings to fluctuate wildly. Automate every possible aspect of the test procedure.

Optimizing the code

We’ll investigate some specific optimization techniques later. Speed-ups vary from the simple refactoring of small sections of code to more serious design-level alterations. The trick is to optimize without totally destroying the code.

Determine how many different ways exist to optimize the identified code, and pick the best. Only perform one change at a time; it’s less risky, and you’ll have a better idea of what improved performance the most. Sometimes it’s the least expected things that have the most significant optimization effects.

After optimization

Don’t forget to benchmark the optimized code, to prove that you’ve made a successful modification. If an optimization is unsuccessful, remove it – back out your changes. This is where a source control system is useful, helping you to revert to the previous code version. Consider removing only marginally successful optimizations, too: prefer clear code to modest optimizations (unless you’re absolutely desperate for an improvement, and there are no other avenues to explore).

Next time…

So, this is a scaffold for optimization. A manifesto for change. In the final installment, we will look at specific code-level optimization techniques.


[1] Barry Boehm (1987) ‘Improving Software Productivity’, Computer 20(9), IEEE Computer Society, September 1987.

Pete Goodliffe is a programmer who never stays at the same place in the software food chain. He has a passion for curry and doesn’t wear shoes.
