Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would not have heard about in school. Recording from our headquarters in the suburbs of Wichita, Kansas, this is Erik Seligman, your host. And now, on to the math.
The last episode’s discussion of randomness brought to mind the classic book “The Black Swan” by economist-philosopher Nassim Nicholas Taleb. His books discuss the disproportionate role of unlikely extreme events, the Black Swans, in shaping our lives and our history. Noticing online that there is a 2nd edition now, I decided to reread Taleb’s book, and got many intriguing new ideas for podcast episodes. Today we will talk about the “Ludic Fallacy”, the incorrect use of mathematical models and games to predict real-life events. To understand this better, let’s look at one of his key examples.
Suppose we do an experiment with two observers in a room, a professor and a gambler. We present them the following mathematical puzzle: I have a fair coin that I plan to flip 100 times, with everyone watching. The first 99 flips are all heads. The two observers are asked to estimate the probability that the next flip will turn up heads. The professor confidently answers, “Since you said it’s a fair coin, previous flips have no influence on future flips. So the chance is the same as always, exactly 50%.” On the other hand, the gambler answers, with equal confidence, “If you got 99 heads, I’m almost certain that the coin is biased in some way, regardless of whether you said it’s fair. So I’ll estimate a 99% chance that the next flip is heads.” Naturally, in a purely mathematical sense, the professor was right, according to the information we provided. But if this were a real-life situation, and you had to bet money on the outcome of the next flip, which answer would you go with? The gambler probably has a point.
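As a side note not in the episode itself, the gambler's reasoning can be framed as a simple Bayesian update. The specific numbers below are illustrative assumptions (a 1% prior chance that the coin is secretly biased, with a biased coin landing heads 99% of the time); the point is that after 99 straight heads, even a tiny prior suspicion of bias comes to dominate. A minimal Python sketch:

```python
# Hypothetical numbers, chosen for illustration: a 1% prior chance the
# coin is biased, and a biased coin that lands heads 99% of the time.
p_biased_prior = 0.01
p_heads_if_biased = 0.99

# Likelihood of observing 99 heads in a row under each hypothesis
like_fair = 0.5 ** 99
like_biased = p_heads_if_biased ** 99

# Bayes' rule: posterior probability that the coin is biased
posterior_biased = (p_biased_prior * like_biased) / (
    p_biased_prior * like_biased + (1 - p_biased_prior) * like_fair
)

# Predicted chance that the 100th flip is heads
p_next_heads = (posterior_biased * p_heads_if_biased
                + (1 - posterior_biased) * 0.5)

print(posterior_biased, p_next_heads)
```

With these assumed numbers, the posterior probability of bias is overwhelmingly close to 1, and the predicted chance of heads on the next flip lands right around the gambler's 99% estimate. The exact figures depend entirely on the made-up prior, but almost any nonzero prior suspicion of bias gives a similar result after 99 heads.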
And this is Taleb’s key insight that forms the Ludic Fallacy: while abstract mathematical models may provide some insight into possibilities, you cannot consider them reliable models of real life. Issues or events outside your simple model may have a huge effect. Taleb criticizes the many professionals who spend their lives creating complex mathematical models, drawing large salaries or becoming media darlings for using them to make intricate predictions about the future, predictions which then turn out to be little more accurate than random chance. Economists are some of the most notorious in this regard. You may recall that back in the 1990s, a large hedge fund called Long Term Capital Management (or LTCM) was built around insights from supposedly genius economists who had Nobel Prizes. But when its “mathematically proven” strategy led to buying massive quantities of Russian bonds with borrowed money, which then defaulted, LTCM failed so badly that it needed a multi-billion-dollar bailout to avoid crashing the world economy.
There are plenty of other examples like this, and it’s not just experts who fall for this kind of fallacy. Taleb is a bit critical of modern software, such as the features in Microsoft’s Excel spreadsheets, that makes it easy for even ordinary workers to mathematically extend existing data into future extrapolations, which are very rarely accurate in the face of unpredictable real-life events. In effect, computers allow anyone to transform themselves into an incompetent economist with high self-confidence.
I think my favorite example that Taleb cites is the story of a casino he consulted with in Las Vegas. They had very meticulously modeled all the ways that a gambler could cheat, or that low-probability events in the games might threaten their cash flow, and had invested massive amounts of money in gambling theory, security, high-tech surveillance, and insurance to guard against these events. So what did the four largest money-losing incidents in their casino turn out to be?
1. The irreplaceable loss of their headline performer when he was maimed by one of his trained tigers.
2. A disgruntled worker, who had been injured on the job, attempted to blow up the casino.
3. An incompetent employee had been putting some required IRS forms in a drawer and failing to send them in, resulting in massive fines.
4. The owner’s daughter was kidnapped, and he illegally took money from the casino in order to ransom her.
Now of course, it would have been very hard for any of these to be predicted by the models the casino was using. That’s Taleb’s point: no mathematical modeling could cover every conceivable low-probability event.
This is also an important reason why Taleb opposes centrally-planned economies. One of the few Nobel-winning economists whom Taleb respects is F. A. Hayek, whose 1974 Nobel speech offered a harsh critique of his fellow economists who fall back on math due to their physics envy, and try to claim that their equations model the world just like the hard sciences. No matter how many measurable elements they factor into their equations, the real world is much too complicated to model accurately and make exact predictions. Modern free economies are largely successful because millions of individuals make small-scale decisions based on local information, and are free to take educated risks with occasional huge payoffs for society in general. In his conclusion Hayek wrote, “The recognition of the insuperable limits to his knowledge ought indeed to teach the student of society a lesson of humility which should guard him against becoming an accomplice in men’s fatal striving to control society – a striving which makes him not only a tyrant over his fellows, but which may well make him the destroyer of a civilization which no brain has designed but which has grown from the free efforts of millions of individuals.”
We should mention, however, that Taleb and Hayek are not arguing that mathematical models are totally useless; we just need to recognize their limitations. They can be powerful for exploring what might happen, and for opening our eyes to the potential consequences of our basic assumptions. For example, let’s look again at the coin-flipping case. Suppose instead of 99 heads, our example had shown a variety of results, including a run of 5 heads in a row somewhere in the middle. The gambler might spot that and initially have a gut feeling that it indicates bias. But the professor could then walk him through a calculation, based on an ideal fair coin, showing that if you flip a coin 100 times, there is over an 80% chance of seeing a run of length 5 at some point. Using this insight from the modeling, the gambler can determine that the run is not evidence of bias, and make a more educated guess, since the initial promise of a fair coin has not yet been proven false. Remember, however, that the gambler still needs to consider external factors that are not covered by the modeling; maybe as he is making his final bet, that disgruntled employee will return to the casino with an angry tiger.
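The professor’s run-length figure can be checked with a short dynamic-programming calculation (a sketch added here, not something from the episode): track the probability of being at each current heads-streak length after every flip, and accumulate the probability mass that ever reaches a streak of 5.

```python
def prob_run(n=100, run=5):
    """Probability of at least one run of `run` consecutive heads
    in `n` flips of a fair coin, via dynamic programming."""
    # probs[s] = probability the current heads streak has length s (0..run-1)
    probs = [1.0] + [0.0] * (run - 1)
    done = 0.0  # probability mass that has already achieved the run
    for _ in range(n):
        new = [0.0] * run
        # tails (prob 0.5) resets any streak back to 0
        new[0] = 0.5 * sum(probs)
        # heads (prob 0.5) extends the streak by one
        for s in range(run - 1):
            new[s + 1] = 0.5 * probs[s]
        # a streak of run-1 plus one more head completes the run
        done += 0.5 * probs[run - 1]
        probs = new
    return done

# Sanity check: in exactly 5 flips, a run of 5 heads means all heads
print(prob_run(5, 5))    # equals 0.5**5 = 0.03125
# The 100-flip case should come out just over 0.8,
# matching the "over 80%" figure in the text
print(prob_run(100, 5))
```

The same routine generalizes to any run length or number of flips, which is handy for testing gut feelings about how "surprising" a streak really is.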
So, in short, you can continue to use mathematical models to gain limited insight, but they are not reliable sources for practical predictions. Don’t get overconfident and fool yourself into believing that your model is guaranteed to uncover every real-life risk.
And this has been your math mutation for today.