What I Want For Christmas
As some of you know, I've had an eventful year, and this seems like a suitable opportunity to reflect upon those aspects of it that relate to software development. Things are always changing, and there is often a pattern, but one sometimes needs to step back a bit to see it. This time I'm going to take a step back thirty-odd years and describe the way that software development happened then.
When I first started writing programs I'd write them out on paper, then, once I'd checked them over, I'd transcribe each line by punching holes into a piece of card (very carefully - most errors required starting the card again). When I had completed all the cards I'd use several elastic bands to ensure that the stack was secured in order, and place it into a tray with other programs in the same format. Later that day the tray would be carried to a local computer centre, where the programs were transferred between trays, passed through a card reader, and eventually united with the corresponding printouts in a tray awaiting transfer back from the computer centre. If I timed things right, a program could be turned around, updated and resubmitted a second time in the same day!
The effect of this was that a lot of the coding of a program was concentrated into a few intensive ten-minute intervals separated by hours of suspense. All too often this meant that a mistake was made in the rush, but not noticed until the stack of cards had begun its tortuous journey to the computer centre and back again. It may sound horribly inefficient to the current generation, but programs really were developed this way. Nowadays errors that would not have been reported by the compiler for half a day or more are highlighted on the screen before I even save the file!
Although people spent a lot of effort trying to make the "dead time" more effective, any attempt to improve efficiency by working on several programs at once never really worked as well as might be hoped. The relentless cycle of turnarounds forced the development cycles into synchronisation, and as there was always one program that was more urgent than the rest it stole the time that the others needed. And, while I've focussed on writing code, it wasn't just getting the program to compile that worked like this: testing and deployment followed similar processes.
But code got written and systems got delivered.
The underlying difference between the way things were then and the way things are now is the speed of feedback. Most readers will be familiar with development environments that highlight syntax errors as you type - problems that could once have led to days of delays and frustration are detected and corrected without conscious thought. Such a change doesn't only affect the speed of progress, it also changes the way that we approach the task. Even those readers without this facility will be working in an environment where it is more effective to use a compiler to check syntax than it is to do so "by hand".
Having reliable and immediate feedback available provides a level of confidence that allows the developer's attention to focus elsewhere. (This is just as well, because the effect of having better tools isn't that the job has got easier - the range of problems that we are willing to tackle has expanded to compensate.)
Naturally, there is much more to developing software than getting the syntax of the code right, and much of this is also dependent upon accuracy. And there are two approaches to accuracy: avoidance of error and correction of error. Each can be appropriate in the right circumstances and, as I have tried to illustrate in the context of coding, the choice can depend upon the tools available.
Traditionally, software development processes have been based around avoidance of errors: getting the requirements right and big up-front design both come from an era of slow, inefficient feedback. There is a significant cost to manually double and triple checking everything to reduce the errors being fed into a process. Automated error detection that provides early feedback and allows early correction is often much more effective. And, based on my experiences this year, I think that it is becoming available for many more aspects of software development.
The checking of individual units of development is the province of unit tests, and the ability to run these automatically as part of the development environment is just about there for some development technologies. For example, there are free JUnit "plug-ins" for most of the popular Java development environments. Having tests light up "green" (for success) or "red" (for failure) when changes are made can trap a lot of silly errors soon enough after they are made that they don't disrupt a developer's line of thought any more than the occasional compiler error. Of course, as yet, this isn't quite as widespread as syntax checking editors - for example, I don't know of a CppUnit plug-in for the Visual Studio environment my current client favours. So, number one on my "Christmas list" is the availability of such a tool for any environment that I happen to be working in.
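The red/green idea can be sketched in a few lines. The example below is hypothetical and, so that it stands alone, uses a hand-rolled `check` helper in place of the JUnit library's assertions - but the shape is the same: each check either passes silently or "lights up red" the moment a silly error creeps in.

```java
// A minimal sketch of the unit-test idea (hypothetical example; the real
// JUnit library is omitted so this compiles on its own).
public class PriceTest {
    // The unit under test: applies a percentage discount.
    static double discounted(double price, double percent) {
        return price * (1.0 - percent / 100.0);
    }

    public static void main(String[] args) {
        // Each check plays the role of one JUnit test method.
        check(discounted(100.0, 10.0) == 90.0, "10% off 100 should be 90");
        check(discounted(50.0, 0.0) == 50.0, "0% off should change nothing");
        System.out.println("green"); // all tests passed
    }

    static void check(boolean ok, String message) {
        if (!ok) {
            System.out.println("red: " + message);
            System.exit(1);
        }
    }
}
```

Run automatically on every change, even checks this trivial catch the transposed argument or inverted condition while the mistake is still fresh in the mind.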
Naturally, there are additional issues with the use of unit tests, such as persuading both developers and management of their usefulness. This can be a significant problem: there is a cost to both writing unit tests and to running them - and they do not detect all errors. Much as I would like to, I cannot point to scientific comparisons between "equivalent" projects run with and without unit tests that demonstrate the benefits. All I can give is anecdotal evidence that the projects on which I've been able to instil a culture of unit testing have had far fewer problems when it came to integration and delivery. (But unit tests are far from the only change that I've introduced - and projects can be delivered successfully without them.)
The one thing that I can say about having unit tests in place is that the level of rework is much lower. As one developer put it: "it is a pain writing these unit tests - but I like getting things right first time". But it isn't as simple as that: things are not always "right first time" - sometimes the requirements have been misunderstood (or have changed: not only can the business change, but the process of capturing requirements can question assumptions, and delivering a software system can offer unexpected alternative approaches).
While there have been attempts to catalogue and collate development practices that work, there is very little convincing evidence for many of the things that I would like to believe. Of course, when working with like-minded individuals this isn't an issue (credible claims require little evidence), but when trying to justify and motivate change, it can be a major problem. When talking to management and developers who believe that standardisation of process, or a new technology, or some other "magic bullet" is the answer to all their development woes, any claims I make will not be considered credible without substantial evidence. So that is the next item on my list: citable evidence of the effectiveness and applicability of alternative development practices.
Some time ago I came across one of Ward Cunningham's innovations: "Fit". Fit is a Java framework for describing system functionality as a web page that can be executed against the system under development. It requires the developers to write some lightweight "fixture" classes that map the requirements embedded in the web page to interactions with the system. The fact that the requirements can be executed directly does a lot to address the ambiguities that frequently find their way into the testing of functional requirements.
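To give a flavour of what such a "fixture" looks like, here is a sketch of a Fit-style column fixture. The names are hypothetical, and the real framework's base class (`fit.ColumnFixture`) is omitted so the sketch stands alone; in Fit proper, each row of a table in the requirements page sets the public fields and the framework then calls the trailing method and colours the cell green or red according to whether the result matches the expected value in the row.

```java
// Sketch of a Fit-style column fixture (hypothetical names; the real
// fit.ColumnFixture base class is omitted so this compiles on its own).
public class DiscountFixture {
    // Inputs: one public field per input column in the requirements table.
    public double amount;
    public boolean preferredCustomer;

    // Output: the framework calls this for each row and compares the
    // result against the expected value in the table.
    public double discount() {
        // Delegates to the system under development (inlined here):
        // preferred customers get 5% off orders over 1000.
        return preferredCustomer && amount > 1000.0 ? amount * 0.05 : 0.0;
    }

    public static void main(String[] args) {
        // Simulate the framework processing one table row.
        DiscountFixture row = new DiscountFixture();
        row.amount = 2000.0;
        row.preferredCustomer = true;
        // Tolerant comparison, as the framework would make for doubles.
        boolean pass = Math.abs(row.discount() - 100.0) < 1e-9;
        System.out.println(pass ? "green" : "red");
    }
}
```

The point is how thin the fixture is: the business rule stays in the system, the expected values stay in the requirements page, and the fixture merely wires one to the other.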
More recently (at The Extreme Tuesday Club) I came across some work that builds upon the Fit framework. "Fitnesse", produced by ObjectMentor, is a Wiki implemented around the Fit framework that facilitates the capture of functional requirements in an executable form: as Fit webpages. Michael Feathers (of ObjectMentor) has also produced FitCpp - a C++ implementation of the Fit framework. (There are some bugs and other issues to resolve with FitCpp but I've been working with it (and Fitnesse) for my current client and, assuming I get suitable permissions, I will have put the resulting material on my website by the time you read this editorial.)
One of the great things about this approach is that it gives very easy visibility of project progress. One may set up a summary webpage that lists all the functional tests, colour coded according to whether the functionality is available (green), is failing (red) or has yet to be addressed (grey). Because these results are produced by executing the tests directly against the system, the feedback is always immediate, up to date and honest (which avoids the temptation to exaggerate progress - both to oneself and to others).
It is easy to overlook what this means to people outside the development group. All too often their experience of software development resembles the coding process I described above: concentrated effort at the beginning with lots of effort invested in getting it right, followed by things being "out of their hands" for a long period before the results are visible. It is only then that mistakes, ambiguities and misunderstandings become apparent. Publishing the current state of development on the intranet gives them much needed feedback early in the development cycle. And, because it is a Wiki, it is simple for the requirements to be updated. And because the requirements are the functional tests these too are maintained in a single, authoritative, place.
Fitnesse demonstrates that it is possible to bring requirements capture and functional testing much closer together than has ever been my experience in the past. This (or something like it but better) should be part of the toolkit on any project. Another one for my Christmas list!
It has been a few years since Martin Fowler codified a number of coding practices that experienced developers know are needed but are hard to associate with a quantifiable benefit. These "refactorings" are transformations that leave the functionality unchanged but make the structure of the code more amenable to further development. In the Java world there is now widespread support for automating these transformations.
These facilities are great: it doesn't sound like much but, to take one example, being able to remove a block of code from a growing method body by selecting it, choosing "extract method" from the menu and then entering the method name is so much simpler than the "old way". The developer is freed from the tedium and mistakes of copying the code, changing the indentation, working out what the parameters need to be and what the return type needs to be (and occasionally discovering that there are subtle reasons why the code cannot be moved after all).
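The result of an "extract method" can be sketched as follows (a hypothetical example): the loop that was once an anonymous block inside a growing method becomes a named method, with the tool having worked out the parameter and return type on the developer's behalf.

```java
// After "extract method": the totalling loop, once inlined in print(),
// is now a named method (hypothetical example).
public class Invoice {
    // Extracted method: the tool deduced the parameter and return type.
    static double totalOf(double[] lineAmounts) {
        double total = 0.0;
        for (double amount : lineAmounts) {
            total += amount;
        }
        return total;
    }

    static void print(double[] lineAmounts) {
        // The call replaces the block that was selected in the editor.
        System.out.println("Total: " + totalOf(lineAmounts));
    }

    public static void main(String[] args) {
        print(new double[]{10.0, 20.0, 12.5});
    }
}
```

Because the transformation leaves behaviour unchanged, `print` produces exactly the output it did before - which is precisely what makes such refactorings safe to apply mid-development.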
I've yet to encounter corresponding support for C++ developers - which is understandable (both in its compilation model and its syntax, C++ is a much harder language to address than Java). But this is my list and I see no reason to be reasonable in my demands: these facilities are great and I don't want C++ to be left out.