The human animal is a predictive animal. The popular opinion is that we’re a pattern-recognizing creature, but in truth, we only care about patterns insofar as they allow us to predict the next step. It’s why we spend fortunes on psychics and industry analysts (the same breed of charlatanry). Mankind’s chief sport and religion is predicting tomorrow, and its high priest is a breed called the futurist.
A futurist or futurologist is a social scientist who attempts to systematically predict the future, whether of technology, society, or life on Earth. The particular futurists we’re interested in are far less lofty and can be found in the halls of Hollywood, advising on the next science fiction blockbuster or plugging away at the next great overlooked novel. They are often wrong, very wrong, on the simplest of topics. Apple product predictions are the perfect example. The rumor mill starts churning early and you hear every possible variation of the prediction, but the only predictions that prove correct are those based on privileged knowledge, such as an inside source or a leaked image.
If you want a more common example, answer me this: where the hell is my flying car and spare jet pack? These things are already two decades late and counting. Actually, these particular devices present excellent examples of the problems with prediction. In short, human beings underestimate their exposure to randomness while also underestimating their exposure to serendipity. We construct narratives to explain deviations. We fail to adequately grasp complexity or take into account the problem of iterated expectations, and we generally feel that we know all that can be known even though we consciously know that’s not the case. However, this inability to predict doesn’t mean that we can’t benefit from planning.
The Oxford English Dictionary defines random as “Having no definite aim or purpose; not sent or guided in a particular direction; made, done, occurring, etc., without method or conscious choice; haphazard.” While this definition is sufficient for our purposes, it’s important to note that randomness has slightly different meanings in different fields. The problem with prediction arises when, in the course of predicting things in the real world, we begin to confuse real-world randomness with the structured randomness of quantum physics and game theory. Structured randomness adds a certain mathematical purity to a system while ignoring key realities, and so we’re left with ridiculous notions: the bell curve, pronouncements like the foolish Moore’s Law (which has objectively been false thus far), or Kurzweil’s silly law of accelerating returns. These systems seek to smooth out, or impose trends on, what are in effect twists and turns.
Look back to your high school math classes and try to remember scatter plots. You were taught to ignore the inherent randomness and draw a line creating a trend. Any fool can divine direction, but randomness makes the details difficult, and predictive precision suffers. We are blind, whether by self-imposition or biological programming, to random events and deviations. In fact, the very word ‘deviation’ implies something behaving abnormally that will eventually return to the fold.
Trends are the biggest “evil” of all in our inability to come to terms with randomness. Especially prevalent in economics is the belief that past performance can be used to predict the future, a belief regularly proven absurd in the face of improbable events. Nassim Nicholas Taleb gave a brilliant analogy in the form of a turkey. A turkey lives its entire life getting fed and taken care of, and may reasonably expect that to continue. Every day of feeding confirms the positive trend, so the turkey has every right, by that logic, to wake up each morning happy and confident in its guiding trend. That is, of course, until the day before Thanksgiving. Trends are only useful until they’re no longer trends.
It would take an entire book to describe how awful we are at dealing with randomness, and how we fail to cope. We create pleasing narratives to explain anomalies away and then forget them, or we convince ourselves that they are the new status quo and upend our lives (see 9/11 and the Twin Towers). The bottom line is that every major advancement has come from a disruptive object, a random event, and not from anything as clearly predictable as we later narrate it to be. The modern world owes far more of itself to serendipity than to genius.
A Serendipitous Century
Perhaps the best example of serendipity is the laser. Gordon Gould invented the optical laser, though the first working device was the ruby laser, built by Theodore Maiman a few years later. The goal was simply to produce a coherent beam of light for scientific work. The outcome was a chain of unforeseen advancements that continues today. The invention of the laser led to a mastery of light that would spark everything from the Blu-ray player in a PlayStation to the guidance systems attached to the most advanced missiles in the world. The laser is a watershed device on which the modern world rests, and yet nothing foreshadowed it until Einstein described stimulated emission in 1917; there was no logical sequence of advancements leading up to it, and no clear roadmap of all the remarkable things to come of it after.
A walk through human and societal advancement is a walk through a park of random fixtures, few of them predicted before you arrived. Human beings are exceedingly good at being surprised, only to turn around and create stories explaining what was previously unforeseen. And it’s not only good things that lend themselves to this narrative fallacy. Return to World War I: as history books would have us believe, the pieces were set and tensions were high; everyone knew a great war was coming, though no one knew how big. To modern historians, World War I should not have been a surprise. Someone should remind historians of the old saying about hindsight. An actual dip into the history would show you that *no one at the time had any inkling of what was over the horizon*. The one thing all the players in WWI had in common was surprise.
The tendency to underestimate the role of randomness and serendipity is aided by this incredible ability we have to find patterns and create comfortable stories. Everything that happens looks, with hindsight, inevitable, planned, and obvious. Simple logic and common sense should tell us that we could never have truly known; the possibilities are just too complex.
When attempting to predict something, the more complex the situation, the more precision is needed. This is not at all controversial; anyone can come to this conclusion. In a complex situation, the more steps ahead you need to predict, the more precise your earlier predictions have to be. Predicting five steps ahead requires your first step to be very precise, and by the time you predict step ten, the precision demanded of that first step has grown exponentially. Simply put, a short-term prediction can tolerate a little deviation, but the further out you go, the more the deviation compounds, because each step proceeds from an already somewhat wrong step. This isn’t hard to wrap our minds around once we recognize complexity. The problem is, we rarely recognize complexity, and we often underestimate it when we do.
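The compounding described above can be sketched numerically. This is a toy model, not physics: the three-times-per-step growth factor and the one-billionth starting error are illustrative assumptions chosen only to show the mechanism by which a negligible initial error swamps a prediction within a few dozen steps.

```python
# Sketch of compounding prediction error: each step multiplies the
# error inherited from the previous step. The 3x-per-step growth
# factor is an illustrative assumption, not a measured constant.

def propagate_error(initial_error, growth_per_step, steps):
    """Return the accumulated error after each prediction step."""
    errors = []
    error = initial_error
    for _ in range(steps):
        error *= growth_per_step
        errors.append(error)
    return errors

# A one-billionth-of-a-unit error in the first measurement...
errors = propagate_error(initial_error=1e-9, growth_per_step=3.0, steps=20)
print(f"error after step 1:  {errors[0]:.1e}")   # still negligible
print(f"error after step 20: {errors[-1]:.1e}")  # now larger than 3 whole units
```

The exact numbers don’t matter; what matters is the shape of the curve. Any multiplicative growth in error, however small the multiplier, eventually makes the far steps of a prediction worthless.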
Take the example of a game of billiards from the book “The Black Swan.” The example was first computed by the mathematician Michael Berry. If you know a set of basic parameters concerning a ball at rest, can compute the resistance of the table, and can gauge the strength of the impact, then predicting what would happen at the first hit is easy. As noted, the need for precision in our assumptions increases, and the second impact becomes more complicated, though still possible, as long as you’re careful about your knowledge of the initial states. By the time you get to computing the ninth impact, you need to take into account even minutiae like the gravitational pull of someone standing next to the table. By the 56th impact, every single particle in the universe needs to be addressed in your assumptions! Even an atom 100 billion light years away has to be accounted for. Now consider the additional difficulty of predicting where all those variables will be in the future, so that you can make an informed prediction of that future.
Note that this billiard example assumes the world is simple and doesn’t take into account other complicating factors like the mood and personality of the players. Yet this is exactly the type of prediction we make daily about the far future and the players in it. A world without free will is already random, complex, and nearly impossible to predict; free will isn’t helping. (This is the inherent weakness of economic predictions, which rest on assumptions of rational self-interest.) So given this complexity and our naiveté about randomness and serendipity, why do we exert so much energy trying to predict the future, and never seem to notice how bad we are at it? The simple answer is that we think we’re better at predicting than we really are.
You’re Not As Good As You Think
Now that you have some idea of complexity, be sure to laugh the next time you see a weather forecast a week in advance. What little success we have in predicting weather can be explained by iterated expectations; otherwise it’s mainly a waste of time. We think we are better at predicting the future than we really are. Try this little experiment, versions of which have actually been performed, though the outcome was not what the researchers set out to find. Fill a room with people. Pick a number at random: anything from how many zits are on your little nephew’s face to how many tiles are on your bathroom wall (presumably you know the answer). Ask each person in the room to independently give a range such that they are 98 percent sure the true number falls inside it; in other words, whatever range they choose should have only about a two percent chance of being wrong. Using the bathroom tile example, I could say, “I believe with 98 percent certainty that there are between 250 and 1,450 tiles in the bathroom.”
You would think that not much more than two out of a hundred participants would be wrong, since they are free to set the range as wide as they want. You’re not gauging their knowledge but their evaluation of their own knowledge. Researchers who ran this kind of study found that the expected two percent error rate turned out to be closer to 45 percent. Repetition after repetition, they came to the same conclusion: we are far too comfortable and confident in what we know, and that comfort is often undeserved. In fact, the more people trusted their own knowledge, the more wrong they tended to be, a result that cut across all cultures, even those that favor humility. I ran the test on myself. The tile estimate above was my real estimate for my own bathroom, one I have used for years. There were actually 180 tiles. I was genuinely shocked at how much I had overestimated my own knowledge; I had expected to be the exception that proved the rule. One interesting note is that those who claimed more confidence in their own knowledge tended to overshoot the range, while those with less confidence guessed low but used a wider range.
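The mechanism behind that result can be imitated with a toy simulation. Everything here (the noise level, the interval width, the overconfidence factor, even the 180-tile true value borrowed from the bathroom anecdote) is a made-up assumption, not data from the actual studies; the point is only that shrinking a genuinely 98-percent interval makes the miss rate explode far past two percent.

```python
import random

# Toy simulation of the 98%-confidence-interval experiment.
# All parameters (noise level, interval width, overconfidence
# factor) are illustrative assumptions, not real study data.

def miss_rate(true_value, n_participants, overconfidence, seed=0):
    """Fraction of participants whose stated interval misses the truth."""
    rng = random.Random(seed)
    misses = 0
    for _ in range(n_participants):
        # Each participant's noisy point estimate of the quantity.
        guess = rng.gauss(true_value, true_value * 0.5)
        # A half-width that would genuinely cover ~98% of such guesses,
        # shrunk by the participant's overconfidence factor.
        half_width = (true_value * 1.2) / overconfidence
        if not (guess - half_width <= true_value <= guess + half_width):
            misses += 1
    return misses / n_participants

well_calibrated = miss_rate(true_value=180, n_participants=10_000, overconfidence=1.0)
overconfident = miss_rate(true_value=180, n_participants=10_000, overconfidence=4.0)
print(f"honest 98% intervals miss:    {well_calibrated:.1%}")  # close to 2%
print(f"overconfident intervals miss: {overconfident:.1%}")    # far higher
```

Narrowing the interval to a quarter of its honest width takes the miss rate from roughly the promised two percent to around half the group, which is exactly the gap between stated and actual calibration the studies describe.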
Geeks know this assessment to be true. Month after month, we are inundated with predictions, expected sales figures, and numerous far-flung expectations from “experts” who are very confident in their predictions, yet are often wrong. The study noted above found that in any field across the board, but especially in the stock market, experts were at best on par with laymen at predicting, and often worse. What about the few times we correctly predict something? In some cases, it’s serendipity or luck; randomness doesn’t have to work against you, and by definition it can fall either in your favor or against it. In other cases, there is a subtlety that can be explained, though not directly, by the idea of iterated expectations.
You Expected What You Expect
A review: we overestimate what we know and underestimate uncertainty by compressing its range. We distort the effect of the uncertain on our lives. Look at the average marrying couple. You can tell them that between a third and a half of all marriages end in divorce, and they may even see the warning signs in other couples, but they cannot come to terms with the fact that those odds apply to their own marriage. Of course, you can see a possible hint as to why we may have evolved to overlook these things: we seem to have been bred to be optimists. We’ve covered this; let’s talk now about iterated expectations.
To put the concept simply: if I expect to expect something at some date in the future, then I already expect that something at present. The book “The Black Swan” offers an excellent illustration: picture the invention of the wheel. If you’re a Stone Age thinker asked to predict the future for your tribe, you MUST predict the invention of the wheel or you will miss essentially all the action. But if you can prophesy the invention of the wheel, you already know what a wheel looks like, and thus you already know how to build one, so you’re already on your way. I hinted earlier that this mechanism accounts for much of the seeming predictive accuracy, little as there is, that we find.
The analyst who predicts the new Apple iDevice is usually only correct when his “prediction” is based on leaked photos and confidential reports. Ask him to predict five years out and he is stumped, though that won’t stop him from yapping anyway. In the year 2000, no one predicted iTunes or the iPod of 2001. Those who had inklings based on leaks could not have predicted the state of the internet today, the iPhone, or the rise of Android. For that matter, no one who first worked on what would become the internet predicted the internet as it has come to be. So we find that those who get it right often have the benefit of privileged information, and thus aren’t actually predicting but reporting. To predict the unknown is impossible by definition, because it would require knowing the unknown.
This brings us to the bloody Singularity. If you can’t guess what I’m about to say about that quasi-intellectual, pseudo-religious drivel, then I suggest you go back and read what I’ve written before this. (To be fair, the ‘singularitists’ are making marked advancements, but I will bet against them nine times out of ten because they do not possess some extraordinary sorcery that I lack. They are a classic example of the trap of iterated expectations: they work actively to create a future and report on their ongoing work, except they call the reports predictions. Note that their success rate at these “predictions” is no higher than the layman’s, despite their proactive collusion.) Proponents of the Singularity underestimate complexity while simultaneously overestimating their ability to predict, create, or curate the future.
If you doubt this, just look around and notice that the computer you’re using is many times smaller and more powerful than most people predicted it would be at this stage. In many ways, we have achieved levels of advancement barely dreamed of. Yet some of the more common predictions, like flying cars, jet packs, and free energy, remain unfulfilled. Inventions we wished for and predicted with a desperate fervor have failed to materialize, yet more than anything else, the past century has been one of unparalleled serendipity. Consider that we have no flying cars and personal jet packs because we’ve developed technologies that make them unnecessary; when flying cars were dreamed up, airplanes could barely make a continental trip, let alone fly non-stop from New York to Singapore. Consider that our wildest dreams for radio became moot the moment we developed satellites and all the attendant technologies of the space age. Do you think anyone had any idea how microwave technology would change the way families ate and the way kitchens would be designed?
I don’t mean to devalue those predictions that have proven correct. As I said earlier, randomness is indiscriminate, and even a broken clock is right twice a day. All I contend is this: human beings underestimate randomness, uncertainty, and the effects of the highly improbable, while overestimating their own knowledge and predictive capabilities. In fact, in a strange way, information can often be the enemy of knowledge: the more confident experts are in their abilities, the worse they tend to be at predicting, relative to non-experts. I contend that luck and serendipity become more useful than invention and hard work as a system grows more complex. I contend that human beings are extremely bad at comprehending complexity and devote far too much energy to building pure sciences and simple equations where true randomness (as opposed to chaos, which does have a pattern) exists. We’re good at “predicting” (the quotation marks denoting derision) only when we manipulate iterated expectations, and we are exceedingly good at constructing narratives out of hindsight to explain away massive and unexpected deviations. Basically, we suck at predicting the future because we evolved to be proactive optimists. But just because we suck at it doesn’t mean we shouldn’t plan.
“I’m an eternal optimist and I fancy myself a futurist. You’re right, though. The real “futurists” are just people who feel a ‘higher mission’ and live to make the world better for future generations. Also, it pisses me off when snobby shitheads think I’m misusing the term “futurist” because it doesn’t fit their art-focused definition.”
— Adam Barrera @Highmileage