Some things go up and up, while others go up and down. Frances Buontempo considers whether the distinction matters and how to spot the difference.
The news in the UK moved on from claiming everything is under pressure, as I mention in the last Overload [Buontempo23a], to telling us we’re at breaking point. Then we ran out of tomatoes. This is a grand distraction from a variety of important issues, and furthermore has sidetracked me completely from writing an editorial, which is a shame. We will produce extra copies of this edition of Overload to hand out at the ACCU conference in Bristol this year, so now would have been a good time to write an editorial. Hello to attendees reading this. For those who have not read Overload before, I have a track record of failing to write editorials, so this situation is entirely expected by our regular readers. It’s therefore very easy to predict whether I will write an editorial or not. Most situations in life are more difficult to predict, though. Things come and go. Prices or even empires rise and fall. Past performance is no indicator of future results. Of course, this makes a mockery of any attempts at statistics or data science. Both disciplines tend to rely on previous results and values to form a model based on patterns. If we assume the future will be unlike the past, we are in effect saying there is no point in making such models.
The philosopher David Hume grappled with this issue. He suggested that trying to predict the future relies on moving from specific observations to general principles, and so uses induction, or inductive inference. This hinges on an assumption sometimes referred to as the uniformity principle: “The future will be like the past” [Henderson22]. He [Hume48] claims:
That the sun will not rise tomorrow is no less intelligible a proposition, and implies no more contradiction, than the affirmation, that it will rise.
We assume unchanging laws govern the universe, and extrapolate from there. In fact, someone, maybe Einstein, once said:
Insanity is doing the same thing over and over again and expecting different results.
Whether or not the quote is ascribed correctly, we might expect calling the same function to give the same results each time, with some caveats. We might describe such a function as pure, or deterministic: we can call it twice with the same inputs and expect the same results. Not all functions behave like this, C’s rand being the first that springs to mind. Running a program using rand twice and getting exactly the same results is entirely possible, since an unseeded generator behaves as though it were seeded with 1, and this confuses coders used to some other languages. As soon as you seed the random number generator differently, though, you will get a different sequence of numbers. Some languages seed the random number generator for you, but some don’t, and this can be a source of confusion. That we can generate sequences of numbers that appear to be random is quite an achievement. John von Neumann once said:
Anyone who attempts to generate random numbers by deterministic means is, of course, living in a state of sin.
The majority of the ‘random’ number generators we use are deterministic. Whether we can generate a sequence of truly random numbers is another matter, and I am starting to suspect I don’t even know what random actually means. We do, however, have tests describing properties we expect from ‘random’ numbers, including the Wald–Wolfowitz runs test:
Generate a long sequence of random floats on (0,1). Count ascending and descending runs. The counts should follow a certain distribution. [Wikipedia-1]
This test is more subtle than checking whether the numbers of increasing and decreasing steps are the same: it checks that the numbers appear to be independent and identically distributed. We know what we want: numbers going up and down, even if we can’t define random precisely.
Sometimes people say, “What goes up must come down.” This isn’t always true. For example, I could throw my keyboard out of the window, and it might go down. If, instead, I launched it out of the window with a suitable rocket, it could either reach escape velocity or orbit the planet. Some things do go up and down, though. When values follow such ups and downs, they can be described as seasonal. Many trading strategies fall into either a trend-following approach or a seasonal approach. The former tends to show a long-term increase or decrease, while the latter cycles over a fixed period. For example, power usage might go up in the winter when the weather is colder and reduce in the warmer summer months. Trying to spot when prices have changed from trend-following to seasonal is difficult, and usually relies on time series analysis.
Sometimes, you might think you have found a pattern. If I say 0, 1, 2, 3, 4, 5, you can guess what might happen next. If I then tell you these numbers were generated using the increment operator starting with
unsigned x = 0; you know the numbers will increase, to a point, then wrap back to zero. What goes up might come down unexpectedly, which you may only realise if you have full details of the context. Some things appear to go up and down at the same time. Escher’s impossible staircase in his lithograph Ascending and Descending immediately springs to mind. Escher had been inspired by the Penrose stairs [Wikipedia-2], which appear to be going up and down simultaneously. Such a staircase is impossible in Euclidean geometry, but feasible in some pure mathematical models. Just as up and down shift with your perspective as you view the picture, so, as your world view or framework changes, up may become down or vice versa.
If you’ve ever tried learning a new language, or even keeping up to date with new versions of the same language, you will be familiar with the rise and fall of learning. You might get to a point of considering yourself an expert, only to be confronted with everything moving under your feet, and falling back to feeling like a complete beginner again. You might feel like you are failing with simple things initially. I mused on this a while ago [Buontempo15], recalling the words of Batman’s father in Batman Begins when the young Bruce Wayne falls down deep into the bat cave:
And why do we fall, Bruce? So we can learn to pick ourselves up.
I am still excited about trying out new programming languages and technologies, but do sometimes experience a twinge of worry when reading the documents or trying out something for the first time.
I’ve been writing a C++ book to try to help people catch up if they got left behind with the various new features introduced over the last few standards [Buontempo23b]. There are many books out there which go into full detail, but I wanted to try out some small, self-contained projects showcasing a few of the newer features, partly for self-indulgent reasons and partly to see how well I can explain myself. I moved from feeling excited when the publisher accepted my proposal, to feeling overwhelmed and like a fraud. Imposter syndrome frequently rears its head. You can avoid feeling like this if you never try anything new, but where’s the fun in that?
Trying to complete any project, be it an editorial or a book, tends to hit a shaky patch in the middle. You might start full of determination and find some extra stamina near the end to make the finish line, but the middle is always difficult. Several of my friends dropped out of university in the second year of a three-year course. I keep trying to row 2km on a rowing machine in the gym and almost always grind to a near halt a bit over 1km. If I pace myself a bit and keep going, I get there, but it is hard work. One day I might manage it in less than 10 minutes. We shall see. Maybe latent heat is a good analogy for this sticky middle? If you heat a substance, its temperature increases for a while. It hits a point where the temperature ceases to increase, while the internal state changes, moving from a solid to a liquid or a liquid to a gas. I haven’t managed to write any of the book for the last few days. I shall tell myself it is latent heat. You can’t see the page count increase, but something is shaping up in my head. Once I’ve sorted out where I’m going next, the page count will start increasing again. Some things can be an uphill struggle for a bit. It’s OK to pause and get your breath back.
Moving from physics to mathematics, we have points of inflection. If you draw a plot of y = x³, to the left of the origin the values are negative but increasing, getting closer to zero. To the right, the numbers are positive and increase. At the origin, y is zero. This point is neither a maximum nor a minimum, but is described as a point of inflection. Minima, maxima and stationary points of inflection each have a derivative of zero, and are collectively known as stationary points. Near a point of inflection the curve might look slightly like an S, with the curvature changing from upwards to downwards or vice versa. They are notoriously sneaked into maths questions, because it is very easy to find a derivative of zero and forget to check whether it is a maximum or minimum rather than an inflection point. Sometimes things are more complicated than they first appear. The same happens when we write software. We might think we have found a way to make code quicker, only to find the speed-up plateaus at some point. We might download a device driver, to be told we have 17 minutes remaining, 16, 15…, 53, then 2, then 19. And so on. Many things seem to trend in one direction, but then things change. The trick is spotting when you missed some information.
Hooke’s law tells us that the force needed to compress or extend a spring or other elastic object is proportional to, or scales linearly with, the distance stretched or compressed: F = kx, for a spring constant k. Until it isn’t. Wikipedia [Wikipedia-3] says:
An elastic body or material for which this equation can be assumed is said to be linear-elastic or Hookean.
If the equation “can be assumed” for some things, it cannot be assumed for others. And if a heavy enough weight is put on a spring it will extend and finally break, no longer being Hookean and in fact no longer being very springy. You can draw a graph of stress (force) against strain (deformation) and see what happens. It might be linear to a point, known as the elastic limit. Then things might change.
Almost nothing is really linear, though linear models lead to simpler maths, so we often use them as approximations. The trick is not to believe our own lies. If a simple model does not work, the setback can be disheartening. We could recall the words in Monty Python’s Life of Brian:
Let us not be down-hearted. One total catastrophe like this is just the beginning!
However, it is more sensible to remind ourselves that learning and growing are often non-linear. If my attempt on the rowing machine slips by a few seconds one time, I will not give up. I still got some practice in, so I should be pleased with myself. Don’t beat yourself up if something doesn’t go as planned. Don’t give in to despair, like Reginald Perrin [IMDb]:
Disillusioned after a long career at Sunshine Desserts, Perrin goes through a mid-life crisis and fakes his own death.
Fear not, this old British comedy is very silly, and things work out for him in the end.
Though the weather has been cold recently, I can see signs of spring in our garden. We also still have several leaves rotting on the lawn from the autumn, which some cultures call the fall. Maybe we should call spring the rise? Things change, sometimes for the better, sometimes not. Whatever is going on in your life, hold on to the positives. Look out of the window and see the flowers starting to bloom, and then settle back and read Overload.
[Buontempo15] Frances Buontempo ‘Failure is an Option’, Overload 129, Oct 2015, https://accu.org/journals/overload/23/129/buontempo_2156/
[Buontempo23a] Frances Buontempo ‘Under Pressure’, Overload 173, Feb 2023, https://accu.org/journals/overload/31/173/buontempo/
[Buontempo23b] Frances Buontempo C++ Bookcamp (under development) https://www.manning.com/books/c-plus-plus-bookcamp
[Henderson22] Leah Henderson ‘The Problem of Induction’ in The Stanford Encyclopedia of Philosophy (Winter 2022 Edition), available at: https://plato.stanford.edu/archives/win2022/entries/induction-problem/
[Hume48] David Hume (1748) An Enquiry Concerning Human Understanding: https://www.gutenberg.org/files/9662/9662-h/9662-h.htm.
[IMDb] ‘The Fall and Rise of Reginald Perrin’: https://www.imdb.com/title/tt0073990/
[Wikipedia-1] Diehard tests: https://en.wikipedia.org/wiki/Diehard_tests
[Wikipedia-2] Penrose stairs: https://en.wikipedia.org/wiki/Penrose_stairs
[Wikipedia-3] Hooke’s law: https://en.wikipedia.org/wiki/Hooke%27s_law
has a BA in Maths + Philosophy, an MSc in Pure Maths and a PhD using AI and data mining. She’s written a book about machine learning: Genetic Algorithms and Machine Learning for Programmers. She has been a programmer since the 90s, and learnt to program by reading the manual for her Dad’s BBC Model B machine.