Sometimes things grind to a halt. Frances Buontempo reminds us we cannot be productive every minute of the day and that downtime is important.
My contract has come to an end, and I haven’t lined anything else up. I’m in the privileged position of having some savings and my husband has a job, so the lack of income on my part hopefully won’t be a problem for now. This will give me a chance to catch up on several tasks, which will be useful. I might even get a chance to do something different once in a while, like go for a long walk. My head is spinning with all the incomplete jobs and half-baked ideas I’ve started on, but not finished. Of course, this means I haven’t got round to writing an editorial so, yet again, I apologise.
I had such plans for my first day off, but ended up spending hours watching a new phone try to transfer everything from my old one, so as usual I spent the day staring at a screen. I then failed to appear on a podcast, since a host couldn’t make it. By the end of the day, I felt as though I’d done nothing, which is an all too common state of affairs. In the time spent sitting around waiting, I did manage to start thinking about how to organise my time and what to prioritise. The day seemed like a buffering day, both as a space between the old and the new, and as a place to line up plans for the future. Sometimes, stopping and seemingly doing nothing is actually much more important than randomly doing a variety of things just because they spring to mind. Have you ever gone into one room to do one thing, got distracted and done something completely different? Almost certainly. Or opened a file in a code base to add a log line, and refactored some horror you found without adding what was needed? Then spent an hour or more waiting for the new log line to appear before realising your oversight? Easily done.
Rather than running your brain at 100% CPU usage, running around doing hundreds of things you didn’t mean to do and forgetting the important tasks, you might go into a room and freeze instead, having forgotten why you went there in the first place. Either way, the important work doesn’t get done, so the outcome is the same. One looks like frantic buffering, while the other appears frozen. Nothing happening and lots of things happening can have the same outcome. In fact, sometimes, they look very similar. How can you tell if a program is really doing something? It may show high CPU utilization, but that can happen if code is stuck in a loop, calculating the same thing over and over.

In a previous role, I had to be on overnight support from time to time. Our team ran various finance simulations overnight which needed to be ready for 9 a.m. the following morning. It was often touch and go as to whether we’d be on time or not. One job in particular often took a long time, and I was called in the middle of the night and asked to bounce the job because it had got stuck. How could we tell it was stuck? It was hammering the CPU, but we couldn’t see any logs, so what, if anything, was it doing? I bowed to pressure, and restarted the job. It got to the same point and still didn’t appear to be doing anything. This time, when the inevitable call came, I refused to restart it, and it did finish with a couple of minutes to spare. The job had not frozen. It was lining up lots of calculations and they took a long time. Unfortunately, there was no way to tell from the outside whether it was doing anything or not. A spot of judicious logging in the right places helped in the long run, as well as optimizing the code where possible.
Many situations have no visible progress, not just an overnight job appearing to be stuck. The same can happen on software projects. I’ve picked up a few Jiras that have spilled over several sprints. Sometimes, the person who wrote the task did a code review and announced, “One more thing.” We called him Columbo, for reasons that are obvious if you’ve ever watched the show [Columbo]. Other times, far more foundational changes were required, so every time you think you’re done, you have to update, merge, retest, fix, rinse and repeat. Like running on the spot for several, ahem, sprints. Often, abandoning the task and finding a way to make the change in smaller steps is better, but we tend to get determined, or bullied, into completing something once we’ve started. Making fundamental changes can take a long time, and there may be no visible changes for a while. That doesn’t mean no progress has been made: we just can’t see the internal improvements from the front end. I guess some kind of code metrics can help here, provided the non-coders on a team understand what they mean and why they are important. Recently, McKinsey produced a report about measuring developer productivity [McKinsey23]. McKinsey are a large management consultancy who regularly publish reports on a variety of subjects, which tend to carry weight and influence many companies worldwide. The report starts by pointing out:
There is no denying that measuring developer productivity is difficult. Other functions can be measured reasonably well, some even with just a single metric; whereas in software development, the link between inputs and outputs is considerably less clear.
They mention Google’s DevOps Research and Assessment (DORA) metrics [Google], along with SPACE metrics (Satisfaction and well-being, Performance, Activity, Communication and collaboration, and Efficiency and flow – which is a bit of a mouthful!) [Forsgren21]. Their report builds on and extends these ideas, but doesn’t really say anything I find useful. I have seen several responses to the report. For example, Kent Beck said on LinkedIn that the report is naïve, though he found the fact that McKinsey think their intended market wants a report like this interesting in and of itself [Beck]. Gergely Orosz and Kent Beck have written a more detailed analysis [Orosz23], questioning some of the measures, such as effort. Now, I go to the gym, and have to put in a huge effort to curl 7kg dumb-bells. I watch other people using 10kg weights, and making it look effortless. Does that make me more productive? No, I’m just not as good as them. Maybe as I keep practising, I’ll get better and be able to lift more. In the meantime, there won’t be any visible progress.
Programming isn’t the only place where it’s hard to measure progress. If you’ve ever had work done on your house, you will know this. Recently, a small part of a boundary wall fell over into the neighbour’s garden. We found a builder, and he was happy to reuse the bricks, after cleaning them up. He hadn’t factored in how long that would take. It turns out ivy can be very destructive and grow through almost anything. It required a huge effort to untangle the mess, and then considerably more digging than envisaged to get the roots out so a new foundation could be laid. For many days, it looked as though nothing more had happened than a pile of bricks had moved from one spot to another. The builder couldn’t be precise about how much longer would be needed, which is understandable. He’s never rebuilt this wall before, so couldn’t be sure. Now he’s spent time getting the ground cleared for firm foundations, he’s making visible progress. Writing code can be like that too. If you’ve never coded a specific algorithm or solved a particular problem before, you can’t tell how long it will take. You can say what you’re up to at the moment and what other tasks will need doing, but you won’t know the unknown unknowns. They are, after all, unknown. Furthermore, progress is often non-linear. If you break work down into, say, five chunks, and the first takes all of Monday, that is no guarantee you’ll be done by the end of Friday. As for clearing the ground to build firm foundations, how many of us have had to justify “no visible progress” and explain “tech debt” on more than one occasion?
The wall is nearly finished now, so our neighbour will be able to let their dog down the end of the garden again. Without the barrier, he was concerned the dog could stray into our garden, and I’m sure our cat might have opinions about that. The dog could probably jump over the wall if it wanted to, but the boundary seems to form a psychological barrier too. For the dog. The cat does what he wants, including wandering into neighbours’ gardens and sitting on my seat. When the wall is rebuilt, I will try to clear up more of the ivy round the garden. Having a buffer zone between the wall and the plants to avoid a repeat of the collapse might be a good idea. Buffer zones give space to see what’s going on. I’ve tracked down buffer overruns and similar by adding variables to the stack to pinpoint where my code was doing something daft. These “canary” variables were a simple but effective approach. There are better tools available nowadays, for example using the /GS flag in Visual Studio to enable buffer security checks [Microsoft21], and OWASP gives details on problems to watch out for and other tools that might help [OWASP].
The word ‘buffer’ means anything that reduces shock or damage due to contact, something that cushions against the shock of fluctuations in finance or, more generally, a protective barrier, according to Merriam-Webster [Merriam-Webster]. Adding a buffer to protect a buffer seems recursive, which is a different problem. Of course, a software memory buffer is not about cushioning or protection, but rather a space to put things. We use a buffer to store user input or other temporary data. We also talk about a webpage or network traffic buffering. Data is queued up, either so it can be accessed quickly or to smooth out lag on the receiving end. This buffering should be a good thing, but we also complain if a video stream or similar is buffering, meaning it has frozen while waiting for the buffer to fill up. Sitting watching spinning wheels or stalled progress bars is very annoying. Whether we need to bounce a router, restart a job or just wait depends on the situation. Some things take time, and we need to learn to be patient.
I’ve ground to a halt several times while trying to write this. My mind keeps wandering to my ever-growing to-do list, while also day-dreaming about what I might be able to do with the spare time I now have. We all need time out occasionally, to allow things to settle. Downtime is important. It may look like inactivity or stagnation from the outside, but buffering moments can lead to innovative sparks or changes of direction. This has got to be an improvement on keeping digging or being stuck in a rut. Let’s try to measure our ‘productivity’ in a positive way, without ending up striving for 100% CPU usage but no constructive outcomes. Failing that, certainly consider adding traces or logging to see what is going on. And try to make them more informative than Terry Pratchett’s computer Hex unhelpfully pronouncing “++?????++ Out of Cheese Error. Redo From Start.” [Discworld].
[Beck] Kent Beck, published on LinkedIn: https://www.linkedin.com/posts/kentbeck_mckinsey-claims-its-possible-to-measure-activity-7099764438496407552-v8P2
[Columbo] Columbo: https://en.wikipedia.org/wiki/Columbo
[Discworld] ‘Hex’, published on Discworld Wiki, available at https://discworld.fandom.com/wiki/Hex
[Forsgren21] Nicole Forsgren, Margaret-Anne Storey, Chandra Maddila, Thomas Zimmermann, Brian Houck and Jenna Butler, ‘The SPACE of Developer Productivity’, acmqueue, Volume 19 Issue 1, 5 March 2021, available at: https://queue.acm.org/detail.cfm?id=3454124
[Google] DevOps: https://cloud.google.com/devops
[McKinsey23] ‘Yes, you can measure software developer productivity’, a collaboratively written article published 17 August 2023, available at: https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/yes-you-can-measure-software-developer-productivity
[Merriam-Webster] ‘buffer’, Merriam-Webster.com Dictionary, https://www.merriam-webster.com/dictionary/buffer
[Microsoft21] ‘/GS (Buffer Security Check)’, posted 8 March 2021, available at https://learn.microsoft.com/en-us/cpp/build/reference/gs-buffer-security-check
[Orosz23] Gergely Orosz and Kent Beck ‘Measuring developer productivity? A response to McKinsey’ in The Pragmatic Engineer, published at: https://newsletter.pragmaticengineer.com/p/measuring-developer-productivity
[OWASP] ‘Buffer Overflow’, available at https://owasp.org/www-community/vulnerabilities/Buffer_Overflow
Frances Buontempo has a BA in Maths + Philosophy, an MSc in Pure Maths and a PhD using AI and data mining. She’s written a book about machine learning: Genetic Algorithms and Machine Learning for Programmers. She has been a programmer since the 90s, and learnt to program by reading the manual for her Dad’s BBC Model B machine.