Resource lifetime management can be problematic. Martin Janzen reminds us how important destructors are and when to be mindful of their limitations.
Most experienced C++ programmers will agree that one of the best properties of our language is the ability to manage object lifecycles using constructors and destructors.
Bjarne Stroustrup [Stroustrup19] has described ctor/dtor pairs as one of C++’s most elegant features, giving us the ability to create clean types which tidy up after themselves, with predictable performance, minimal overhead, and no need for garbage collection.
In this year’s ACCU Conference Lightning Talks, Nico Josuttis singled out destructors as (spoiler alert!) “the most important C++ feature” [Josuttis23]; and Wiktor Klonowski told a sad tale of time wasted debugging a .NET program that kept running out of ports, a fate which could have been avoided by the use of destructors [Klonowski23].
At the same conference, as well as at the recent C++ On Sea, numerous speakers talked about C++ and safety, a subject that’s been very much in the news recently [NSA22], with C++ predictably receiving a lot of flak for the ease with which one can write code containing buffer overflows, memory leaks, and of course a rich and varied choice of ways to introduce undefined behaviour (UB).
License to Kill
In its favour, though, C++ also provides at least one way in which we can greatly improve safety and reliability: the powerful RAII (Resource Acquisition Is Initialisation) idiom, in which we take ownership of a resource in the ctor, then release it in the dtor.
If we ensure that all of our program’s resources are managed via RAII-based classes, it becomes fairly straightforward to avoid resource leaks [Core23]. Memory is freed automatically, mutexes unlocked, threads joined, database connections released, files and sockets closed, and so on.
Furthermore, this approach makes it much easier to write code which is exception-safe, because RAII-based resource management classes can ensure that every newly-acquired resource is released if a scope is exited because of a thrown exception.
In many cases we don’t even need to write the RAII code ourselves:
- Memory can be owned via a smart pointer such as std::unique_ptr or std::shared_ptr.
- Mutex locks can be managed by std::lock_guard and its variants.
- std::jthread can often eliminate the need to write a custom thread guard class, as in [Williams19].
Of course, all of this works because the C++ language promises us that the destructor will be called exactly once, when the lifetime of an object ends.
To review, this happens:
- at the end of a full expression, for temporary objects,
- at the end of a scope, for automatic (stack-based) objects, either normally or when the stack is unwound due to an exception,
- on thread exit, for thread-local objects,
- on program exit, for objects with static storage duration, or
- when the dtor is called directly: by a delete expression, by a direct call when using placement new, or via an allocator’s destroy() function. (In most cases, though, direct calls should be reserved for RAII classes and library code.)
So, job done; our resource management headaches are solved. What can possibly go wrong?
No Time to Die
Unfortunately, destructors are not always called exactly once.
First, let’s look at some situations in which an object’s dtor may not be called at all.
Sometimes this may be due to factors which are entirely beyond our control, causing our program to terminate without any warning or recourse:
- Power failures and hardware faults can put a stop to things.
- Finite resources such as memory can become exhausted, even if we are managing them correctly.
- In a POSIX-like environment [1], an uncaught signal may terminate our process immediately. SIGKILL (-9), in particular, cannot be caught.
- The last two may occur together – as when Linux decides that the system is dangerously low on memory and its out-of-memory killer starts getting rid of particularly greedy processes.
In other cases, it may be due to a software bug:
- When not using RAII, it’s easy to forget to delete an object.
- Even if a resource manager such as std::shared_ptr is used, it is possible to create two or more objects which hold shared pointers to each other, creating a cyclic graph which prevents any of the objects from being destroyed automatically.
- An uncaught exception will cause a call to std::terminate() and, by default, to std::abort(). (More on that later.)
- UB. As the name suggests, pretty much anything can happen next.
Then, we have the halting problem.
No, not that one [Turing37]. I’m concerned here with the way in which we exit from a C++ program.
For many programs, such as command-line utilities, this is obvious: simply exit from the
main() function when finished, either by returning an exit code or just falling off the end.
However, other programs are meant to run for indeterminate periods of time. Software with a graphical user interface is normally started by its user, and runs until asked to quit. Server-based software, from system daemons to web servers to trading systems, is usually started and stopped by a controller such as init or systemd, or by some sort of task manager or framework. For these cases, the C++ Standard Library provides a number of functions that will stop the current process, with varying degrees of speed and grace.
The first one which comes to mind will likely be
std::exit(). It sounds like just the thing, doesn’t it? But no experienced C++ programmer will be surprised to find that it’s not that simple.
This came to my attention in a recent conversation with a colleague [McGuiness23] who was unhappy about the presence of a
std::exit() call in a code base he was reviewing. When asked why, he explained that while this would call the destructors for static and thread-local objects, it would not call the
dtors for automatic variables. This sounded surprising to me, but after a bit of digging on cppreference.com and in the C++ standard, I found that this is in fact the case.
But why would
std::exit() ignore the dtors for automatic variables? It turns out that, for a normal exit in which the program returns from
main(), there won’t be any. Returning from the
main() function has the effect of ending its scope, causing objects with automatic storage duration to be destroyed. This is followed by an implicit call to
std::exit(), which destroys the remaining static objects and terminates the program.
So, what happens if
std::exit() is called elsewhere in the program? Does it matter that some automatic objects’ dtors may not be called on exit?
Often it does not. If the program is running under any of the usual operating systems, the OS will reclaim memory used by the process, close files and sockets automatically, and so on. If functions higher up in the call stack have existing automatic variables which own these resources, the fact that their dtors are not called may not make any difference at all.
However, it is dangerous to assume that this is the case – or that it will remain so in the future. If the program in question is large enough and complex enough, and if it has even a small team of developers all making changes to it, we are leaving ourselves open to some very subtle and intermittent bugs.
Most obviously, if the program has acquired resources which are not cleaned up automatically by the operating system – think of temporary files, System V IPC structures, GUI objects, database sessions, hardware devices, open orders, or worse – then this can cause a resource leak which is extremely hard to track down, especially if we believe that we have cleverly ruled out this possibility by wrapping our resource with a nice RAII manager.
Also, most of us will have run into shutdown errors, in which a program works perfectly well until it is time to stop, but then comes to an undignified end, perhaps leaving behind a corefile or a set of disturbing log messages. Often this is caused by code which expects that objects will be destroyed in a particular order, and thus it is safe for one object’s dtor to refer to another object that is still presumed to exist. (Data structures representing complex graphs are good candidates for this; ordering of data members also can be a culprit.) If we have a dependency graph containing a mixture of objects with automatic and static storage duration, then
std::exit() may alter the usual destruction order, with unfortunate results.
What can we do about all this? The C++ standard library does offer a number of other exit functions; Table 1 is a summary of information from cppreference.com.
Note 1. The last two columns refer to the lists of functions registered using std::atexit() and std::at_quick_exit().
Note 2. The
Note 3. On POSIX-like systems it is also possible to terminate a C++ program by calling C library functions such as
Note 4. For the sake of comparison, the last two rows show the behaviour of two gcc compiler intrinsics.
Looking at the ‘auto’ column, it’s clear that none of these functions will cause the stack to be unwound and
dtors for objects with automatic storage duration to be executed.
Therefore, the only way to ensure a clean exit is not to call std::exit() and its friends at all, but to ensure that the program always returns from main().
This may seem impractical in a complex program in which the decision to exit is made far down the call stack. However, if the program is single-threaded then a simple solution is to throw an exception that propagates all the way back to
main(), where it is caught and converted to a return. We might choose to throw an exception type that is not derived from
std::exception in order to avoid inadvertent catches on the way up, but to still allow catch/rethrow by objects which must do something specific during shutdown.
In a multi-threaded program this is trickier. A thread which throws an uncaught exception will terminate the entire program – and the default std::terminate() handler calls std::abort(), which doesn’t call any
dtors at all. A different approach is required, possibly using
std::future to return the exception to the main thread, as well as some means of stopping and joining other running threads before returning from
main(). The details are well outside the scope of this article, but see [Williams19], as well as later C++20/23 changes such as std::jthread and std::stop_token.
Last but not least, consider that where external resources are involved, when the program is restarted it may be wise to ensure that those resources are in fact in a known and usable state; that they have not been left in a bad state by an earlier unclean exit. This is highly application-dependent, and may be much easier said than done.
Die Another Day
As if all of that wasn’t bad enough, let’s consider a number of situations in which a program can attempt to destroy an object more than once:
- Calling delete with a pointer to an object that was not created by an earlier call to new (generally caused by calling delete twice with the same pointer value, with no intervening new) is UB; but in practice it’s likely to take the form of a second call to an already-destroyed object’s dtor.
- A similar duplicate call to delete can occur if two or more std::shared_ptr instances are created, all of which point to the same object, because the separate std::shared_ptr instances have distinct reference counts.
- A duplicate dtor call may also occur due to an error in the move ctor or move assignment operator of a resource manager class, if the pointer (or other resource handle) in the moved-from instance isn’t set to null, or otherwise made to give up ownership.
- The same error can occur in a copy ctor or copy assignment operator – though [Müller19] points out that a resource manager class should be move-only, and so these functions arguably should have been deleted in the first place.
- A dtor may be called explicitly, with p->~T(), which is fine when destroying an object created via placement new – but not if a bug causes this to happen twice for the same pointer value.
- As usual, any UB could conceivably manifest itself as a duplicate dtor call.
Fortunately, these are all software errors, and should therefore be preventable, or at least debuggable if they do occur.
A duplicate delete used to be a very hard problem to track down. If you were lucky, the program would crash immediately and leave a nice corefile to help with debugging. If not, the symptoms might not appear until much later, when there would be almost no chance of spotting the original error.
Today, though, we are fortunate to have lots of help. Most C++ compilers now provide sanitizers such as ASAN and UBSAN which detect most such errors, and produce very detailed reports showing where the duplicate dtor call occurred, where the object was initially created, and where it was first deleted. There’s no excuse for not taking advantage of these wonderful tools.
Static code analyzers are becoming very smart as well, though I’m not yet aware of one which can spot this sort of error at compile time. (I’d be delighted to be corrected on that.)
You Only Live Twice
Finally, I’d like to point out one other scenario in which our
dtors can surprise us by being called more than once.
In a POSIX-like environment, a process can call
fork() to create a new copy of itself, resulting in a running parent and child process. The child process inherits a copy of the parent’s memory space, as well as a number of operating system structures, such as the list of open file descriptors.
This can be used to implement concurrent processing – though today other mechanisms such as threads (or, preferably, higher-level structures based on threads) and parallel algorithms offer better ways to achieve concurrency.
More commonly, the
fork() call is used in conjunction with
exec() in order to launch an entirely separate program; perhaps an existing utility program whose services our parent process wants to use, or an interactive program which the parent can control via stdin/stdout or another IPC mechanism.
In this case it’s good practice for the child process to close all file descriptors (fds) it inherited from the parent, then call
exec(). If successful,
exec() overlays the child process image with that of the new executable, and starts the new program running.
But what if
exec() is not successful?
This might happen if the executable which was meant to replace the child process cannot be found, or is unavailable due to file permissions or other restrictions. In this case, both the original parent and child processes continue to execute.
So, what can the child process do now? It is an exact copy of the parent, which means that its memory contains copies of the instances of each of the parent’s objects. If the child process exits normally, via
main(), all dtors for existing objects will be invoked!
If the dtors simply free up allocated memory, that may be all right, as the memory being freed is a copy of the parent’s memory. If they are associated with buffered streams, such as
stdout, there may be some confusion as anything in the buffer at the time of the
fork() call will be displayed twice.
However, if any child
dtors free resources that exist outside of the program, this will not end well. Those resources will suddenly become unavailable to the parent – and when the parent process exits, it will attempt to free the resources again, with unpredictable consequences. The only safe thing for the child process to do at this point is to exit, and to exit in a way which guarantees that absolutely no
dtors are invoked (and no registered
at_quick_exit() functions either).
Checking the table above, there’s only one function which will do the
job, and that’s
std::_Exit(). This calls no
dtors at all, nor any of the registered functions – exactly what we need. The child process simply disappears, leaving the parent process’s objects intact.
You can try this for yourself, at [Janzen23].
C++ destructors are a powerful tool; one which we tend to take for granted, both for better and for worse. We should try to make the most of them, using RAII wherever possible to protect against the sorts of resource management problems which bedevil so many other languages.
At the same time, we need to be mindful of their limitations – to understand how they work and how they can fail.
I’ll close with a few recommendations:
- Always try to exit your programs gracefully; that is, by returning from main().
- Avoid trying to exit a process from within application code, and certainly from within library code.
- When implementing a fork/exec, always have the child process call std::_Exit() if the exec() call fails.
[Core23] C++ Core Guidelines, “E.6: Use RAII to prevent leaks”, https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md#e6-use-raii-to-prevent-leaks
[Janzen23] Compiler Explorer demo, https://godbolt.org/z/YPonvWq7a
[Josuttis23] Nico Josuttis, ACCU 2023 Lightning Talk, “The Most Important C++ Feature”, https://www.youtube.com/watch?v=rt3YMOKa0TI
[Klonowski23] Wiktor Klonowski, ACCU 2023 Lightning Talk, “’Huzzah!’ for destructors in C++”, https://www.youtube.com/watch?v=0WmriNuQu60
[Müller19] Jonathan Müller’s blog, 2019-02-26, https://www.foonathan.net/2019/02/special-member-functions
[NSA22] National Security Agency, Press Release, 2022-11-10, https://www.nsa.gov/Press-Room/News-Highlights/Article/Article/3215760/nsa-releases-guidance-on-how-to-protect-against-software-memory-safety-issues
[Stroustrup19] Lex Fridman Podcast, 2019-11-07, https://www.youtube.com/watch?v=LlZWqkCMdfk
[Turing37] Alan Turing, “On Computable Numbers, With an Application to the Entscheidungsproblem”, 1937, https://turingarchive.kings.cam.ac.uk/publications-lectures-and-talks-amtb/amt-b-12
Credits / Apologies
All movie titles are trademarks of EON Productions Limited, and are used for educational purposes only, under the ‘fair dealing’ exceptions to UK copyright law.
1. For this article I’m assuming a POSIX-like environment, simply because that is what I know. Windows developers should have little difficulty finding equivalents in their own world.
Martin Janzen has enjoyed writing code for hire since before the IBM PC or Apple ][; and C++ since, well, ‘Nevermind’. After early adventures in telecomms and digital TV, he’s ended up in the City of London writing financial software, as one does.