How sure are you of something? Ric Parkin considers how we build models, and celebrates a milestone.
And welcome to Overload... 100! While the cynical mathematician in me knows that the significance of the number is just a coincidental artifact of the number of digits on a bilaterally symmetric semi-evolved simian's pentadactyl forelimbs, it's still a worthwhile moment to pause and look back at how we got here, and where we'll be going. Some good old-fashioned navel-gazing, in fact.
When did it all start? I must confess, having only come across ACCU back in 2000 or so, that I didn't know too much about its beginnings, or about Overload in particular. Fortunately, the internet (and some willing volunteers) have come to my rescue: we now have (almost) all the back issues on the website [ Overload ] (although many are currently only available as a PDF of the whole journal), so we can browse through and see what life was like back then.
The first Overload came out in April 1993, and an account of its genesis as a Special Interest Group of what was then called the C User's Group (UK) can be found in that first editorial, including the initial inspiration of teaching people about the facilities in a trendy new language called C++. Many of the early writers are unfamiliar to me, but there are a few whose names are still associated with ACCU. Early editions were simple newssheets, with source code distributed on an accompanying floppy disk. Over the years things have moved on, with technology changes helping with better quality publishing, easier collaboration via email and other communication routes, and the hosting of documents and source code online. Sadly some things aren't so great - the ACCU journals are now quite rare in that we still distribute print versions, with most computing magazines now being web-only (is it just me, or do other people find online technical articles much harder to read, served in small chunks at low DPI with animated adverts flickering away? Perhaps better quality e-publishing via gadgets such as the Kindle and iPad may improve this). The content has changed too, with Overload (and the wider ACCU) no longer being exclusively C++ focused, but now taking in other languages as well as project management, and even some philosophical musings.
As an aside, I noticed that this history has some parallels with my own relationship with C++: having first come across it in 1993, when I had to maintain a DLL to allow access to a C library from a Pascal program, it became my main language for the next decade or so, and then I branched out to a wider mix of languages and technologies as well as doing more project management.
So what of the future? Well, Overload is currently looking healthy, with a good stream of regular and occasional articles, which a great production team turns into a magazine that people really seem to be interested in reading. We'd always like new articles and writers though, and I have heard people say that they've an idea but don't seem to have the time, or aren't sure people would be interested. I can reassure them that pretty much every idea is interesting in some way, so drop me an email and we can advise and help you get something into print... With the upcoming new C++ standard there are plenty of great opportunities for article ideas, so get cracking and be part of the next 100!
Modelling the world
I mentioned that we sometimes have more philosophical articles, and this issue has an interesting one from Rafael Jay on the parallels between bug hunting and the scientific method. It generated plenty of comments from reviewers about how to extend the idea further, so I'm sure it'll inspire many of you equally. This is an area which has always fascinated me, and it chimed with some other thoughts I've had recently, especially after Bruce Schneier's talk at the ACCU security conference at Bletchley Park [ Schneier ].
The basic idea I took away was that people can have security, and they can feel secure, but that the two are not necessarily as connected as you'd expect. For example, many airport security measures, such as restricting what can be taken in hand luggage, don't actually make you significantly safer, but are really there to reassure you because you can see that Something Is Being Done (although paradoxically the extra attention can play on your fears and make you feel less safe too...). On the other hand, you might feel perfectly safe in your car because you're in control, and yet the chances of being injured or killed in an accident are much higher than in a plane. He described this in terms of there being what you Feel, and Reality, and said you should be aware of the differences when evaluating a security response. He also added a third element, a Model, which is what you use to try to understand the Reality part when it gets complicated. Ideally your Model should reliably reflect Reality (at least for the questions you're asking of it), but sometimes it can get out of sync, especially if the Reality changes and you don't update the Model. For example, say you've forced your system to insist on changing passwords every month. Your Model tells you that this limits an attacker's window of exploitation if they crack a password, and you Feel secure. But after a couple of months people have got fed up with forgetting their new passwords, and have settled on 'Password1', followed by 'Password2', etc. Reality has just changed, but your Model no longer reflects it and you have a false sense of security.
I was intrigued by this Reality-Model-Feel separation, and realised you can apply it in many other situations. For example in politics, where many policy decisions may be made because of an underlying Model (or ideology, which may or may not reflect Reality!), but the presentation is often about manipulating the audience's Feel appropriately. User interface design has related ideas too - a good UI should induce the user to form a particular mental Model that reflects the task they are trying to do. The underlying Reality of how this task is achieved may be very different, but if you get the UI's Model to mirror what they're trying to do, you'll have a good UI.
Closer to home, I recognised from Rafael's article how debugging is often about looking past your initial Feel about some code ('Of course this loop works - we've tried it before!'), building a new Model ('How many times does this loop really run in this case?'), and testing that against Reality to determine where the bug actually is ('Oops, negative count passed in'). And as his article title suggests, the Scientific Method has many similar features, both in how it ought to work - by checking the results of your Model against Reality, you can see how accurate it is, and adjust it - and in how your Feel can suggest possible improvements and checks to make sure your Model isn't resting on flawed assumptions, prompting refinements and sometimes a complete rethink (although in practice this happens very rarely in science, as most models are already pretty good and just get adjusted; plate tectonics is one notable place where a true revolution occurred).
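To make that 'negative count' example concrete, here's a minimal hypothetical C++ sketch of my own (the function name and scenario are invented for illustration, not taken from Rafael's article):

```cpp
#include <cassert>
#include <vector>

// Feel: 'Of course this loop works - we've tried it before!'
// Model: the loop body runs exactly 'count' times.
int sumFirst(const std::vector<int>& values, int count)
{
    int total = 0;
    for (int i = 0; i < count; ++i)
        total += values[i];
    return total;
    // Reality: with a negative count the loop condition fails
    // immediately, the body runs zero times, and the caller
    // quietly gets 0 back instead of an error.
}
```

Testing the Model against Reality - for instance, stepping through a call with a negative count and seeing zero iterations - exposes the mismatch; a reasonable fix is to reject or assert on negative counts at the function boundary rather than let the bad value propagate silently.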
Sadly it can also be subverted. I've recently been reading Bad Science [ Goldacre ] and have found it troubling how badly science is reported in the media, and how our Feel model can be manipulated so that it doesn't reflect reality, whether deliberately or just through misplaced hope or fear. The MMR vaccination scare is a classic case - while there was a large amount of evidence for the vaccine's safety from this country and others, a very small-scale study that found a borderline statistically significant correlation (i.e. it could well have been chance) was portrayed as a serious risk, which understandably concerned people. For some reason, many people just refused to believe any of the reassurances and further studies that demonstrated its safety - perhaps on the precautionary principle, as there were children involved, or perhaps because the concrete choice of having the vaccine felt too scary, whereas the abstract (and yet higher) risk of not having it didn't. The problem is that everyone quite rightly uses their Feel model to come to a conclusion quickly, but if that model is not justified then you might get things wrong, and it can be very hard to change your initial gut feel. A striking example from Bletchley Park was that the Germans suspected the Allies were getting intelligence on their activities, but were so certain that their codes were unbreakable that they never seriously contemplated that possibility.
Most of us do not have the time or expertise to check things out, so we rely on others to do so for us. The trouble is, who do you trust? People can be wrong for all sorts of reasons. An amusing example where common knowledge turns out to be just garbled tradition turned up on QI the other day [ QI ]: everyone knows that you should not drink alcohol when taking antibiotics, but why? I'd always thought it stops them working, but apparently that's not true (although with some it would cause unpleasant side-effects). QI's answer was that one of the first major uses of antibiotics was to treat syphilis, especially in soldiers. But as people would still be infectious for a while after they started treatment, they were told not to drink, to stop them going out to celebrate with reduced inhibitions, one thing leading to another, and so spreading the infection. And the advice stuck. (I thought I'd better do some research to see if this is plausible, and apparently it could well be true [ Alcohol ].)
I've also been reading Merchants of Doubt [ MoD ], which is much more troubling - it documents cases where people's Feel models are manipulated to discount evidence backing a very different conclusion, whether for ideological or financial reasons (or just through people being stubborn about not changing their own Feel model), often using the idea of a fair and balanced debate as a way of airing very minority views as if they were as well supported as 'the other side'. This may well be a reasonable thing to do in politics, but in science we can check our models to see how good they are, so things are not just a matter of opinion. An old example was the campaign to cast doubt on the evidence that smoking was a major contributor to lung cancer. I doubt there are many people left who seriously disagree with that any more, but it took 30-odd years to get there, mainly because people were encouraged to think that 'the science isn't settled', or that it wasn't 'proven' - which of course science can never do, as it deals with finding models that are useful but can always be improved as more evidence comes to light. This sort of tactic works very well in areas such as medicine, where you are dealing with things that are highly complicated and probabilistic, as it plays on people's desire for things to be definitely one way or another. The book does cover more recent examples, some of which are still 'controversial', and yet the parallels are striking. It can be very hard to avoid prejudices, mistakes and misdirection (including your own) to build a good model that lets you come to a reliable conclusion, but I think it's something we should strive for.