Saturday, July 2, 2022

279: Improbable Envelopes

Audio Link

Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would not have heard in school.   Recording from our headquarters in the suburbs of Wichita, Kansas, this is Erik Seligman, your host.  And now, on to the math.

Today we’re going to talk about a well-known paradox that a co-worker recently reminded me about, the Two Envelopes Paradox.   It’s similar to others we have discussed in past episodes, such as the Monty Hall paradox, in that a subtly incorrect use of the laws of probability leads to a conclusion that isn’t quite right.

Here’s how the basic paradox goes.   You are shown two envelopes, and told that one contains twice as much money as the other one, with the choice of envelopes for each amount having been pre-determined by a secret coin flip.  No information is given as to the exact amount of money at stake.   You need to choose which envelope you want.   After you initially point at one, you are told the amount of money in it, and asked, “Would you like to switch to the other one?”   Since you have been presented no new information about which envelope has more money, it should be obvious that switching makes no difference at this point, as either way you have a 50% chance of having guessed the right one.

But let’s calculate the expected value of switching envelopes.   Say the envelope you chose contains D dollars.   If you stick with your current envelope, the expected amount you will get is simply D.  The other one contains either 2D or D/2 dollars, with 1/2 probability for each.   The expected value if you switch is then 1/2*2D + 1/2*D/2, which adds up to (5/4)D.    Thus your expected winnings if you switch are greater than the D you would gain from your first choice, and you should always switch!   But this doesn’t make much sense, if you had no additional information.   Can you spot the flaw in this reasoning?

The key is to recognize that you’re combining two different dollar amounts in your expected value calculation:  the D that exists in the case where you initially chose the smaller envelope is different from the D if you chose the larger one.   The easiest way to see this is to define another variable, T, the total money in the two envelopes combined.   In this case, the larger envelope contains (2/3)T, and the smaller one (1/3)T.   Now the expected value is the same whether you keep or switch:  (1/2)*(2/3)T + (1/2)*(1/3)T, or simply T/2.   Alternatively, you could have come to the same conclusion by modifying our original reasoning using Bayes’ Theorem, replacing our reuse of D with correct calculations for the conditional values in each envelope.
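
If you’d like to check this numerically, here’s a quick Python sketch that fixes the total T and simulates the game.   The specific total is an arbitrary choice of mine, just to make the simulation concrete:

```python
import random

def simulate(trials=100_000, total=3.0, seed=1):
    """Fix the total T in the two envelopes, then compare the
    average payout from always keeping vs. always switching."""
    rng = random.Random(seed)
    small, large = total / 3, 2 * total / 3
    keep = switch = 0.0
    for _ in range(trials):
        # A fair coin flip decides which envelope you initially picked
        if rng.random() < 0.5:
            mine, other = small, large
        else:
            mine, other = large, small
        keep += mine
        switch += other
    return keep / trials, switch / trials
```

Both averages come out near T/2, confirming that switching gains nothing.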

But weirdly enough, there does seem to be an odd strategy that will let you choose whether to switch with a greater than 50% chance of getting the higher envelope.   Here’s how it works.   After you choose your first envelope, pick a random value N using a normal probability distribution.   This is the common “bell curve”, with the important property that every number on the number line has some probability of being chosen, since the ‘tails’ of the bell asymptotically approach but never reach 0.   So if the center of the normal distribution is at 100, you have the highest probability of choosing a number near 100, but still a small chance of choosing 1000, and a really tiny chance of choosing 10 billion.   Therefore, if you’ve chosen a number N using this distribution, there is some nonzero probability, P, that N falls between the dollar values in the two envelopes.   Now use N as a threshold to decide whether to switch:  if N is greater than the amount in the envelope you chose, you switch, and otherwise you keep your original envelope.

Why does using this random number help?   Well, if your random number N was smaller than both envelope values, then you will always keep your first choice, and there is no change to the overall 50-50 chance of winning.   If it was larger than both, you’ll always switch, and again no change.   But what if you got lucky, and N was between the two values, which we know can happen with nonzero probability P?   Then you will make the right choice:  you will switch if and only if your initial envelope was the smaller one!   Thus, with probability P, the chance of N being in this lucky interval, you are guaranteed a win, while the rest of the time you have the original odds.   So the overall chance of winning is (1/2)*(1-P) + 1*P, which is slightly greater than 1/2.
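
Here’s a quick Monte Carlo sketch of the strategy.   The dollar range for the smaller envelope, and the center and width of the bell curve, are arbitrary choices of mine, just to make the simulation concrete:

```python
import random

def cover_strategy(trials=100_000, seed=2):
    """Switch only if a random Gaussian threshold N exceeds the
    amount seen in the first envelope.  All ranges are hypothetical."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        small = rng.uniform(50, 150)   # unknown to the player
        large = 2 * small
        # A fair coin flip decides which envelope was picked first
        mine, other = (small, large) if rng.random() < 0.5 else (large, small)
        n = rng.gauss(150, 100)        # the random threshold N
        final = other if n > mine else mine
        if final == large:
            wins += 1
    return wins / trials
```

With these parameters the win rate comes out noticeably above 50%, and it drifts back toward 50% as the envelope amounts move into the far tails of the bell curve.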

Something seems fishy about this reasoning, but I can’t spot an obvious error.   I remember also hearing about this solution from a professor back in grad school, and periodically tried searching the web for a refutation, but so far haven’t found one.   Of course, with no information about the actual value of P, this edge can be unpredictably small, so is probably of no real value in practical cases.    There also seems to be a philosophical challenge here:  How meaningful is an unpredictable bonus to your odds of an unknowable amount?   I’ll be interested to hear if any of you out there have some more insight into the problem, or pointers to further discussions of this bizarre probability trick.

And this has been your math mutation for today.


Sunday, May 22, 2022

278: Bicycle Repair Man

Audio Link

With the arrival of summer weather in many parts of the U.S., it’s time to think again about various outdoor activities.   Watching a few bicycles pass by my house the other day brought to mind a famous anecdote about pioneering mathematician and computer scientist Alan Turing.   As you may recall, Turing was the famous British thinker who not only founded theoretical computer science, but also was the primary visionary behind the project to crack the German Enigma code, a key contribution to the Allied victory in World War II.   As often happens with such geniuses, his personal life was very odd, though he usually had reasons for whatever he did.   For example, he was mocked as ridiculously paranoid for chaining his favorite coffee mug to a radiator.   But there are rumors that years later, a pile of smashed coffee mugs was found near his old office, apparently thrown away by a disgruntled co-worker.   Another crazy story involves Turing’s efforts to repair his bicycle.

As the story goes, Turing noticed that every once in a while, the chain on his bike would come loose, and he would have to stop and kick it a certain way to get it back on its track again.   After a while, he noticed that the intervals at which this was happening seemed kind of regular, so he decided to check that theory rigorously.   He attached a mechanical counter and started measuring the exact interval at which the problem was occurring.   It turned out he was right— the intervals were regular.   The number of clicks between failures was proportional to the product of the number of spokes in his wheel, the number of links in the chain, and the number of cogs in the bicycle gears.   Once he discovered that, he took a close look at the components, and soon found the root cause:  there was a particular spoke that was slightly bent, and when this got too close to a particular link in the chain that was also damaged, the chain would be pulled off.   Armed with this knowledge, he was able to fix the bent spoke and resolve the problem.   It’s been said that any competent bike repairman would have spotted the issue in a few minutes without bothering with counters and intervals, but the mathematics of that would be pretty boring.
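
As a mathematical aside, the exact interval at which repeating cycles like these realign is their least common multiple, and the product mentioned in the anecdote is always a whole-number multiple of that, consistent with the “proportional to the product” observation.   Here’s a tiny Python sketch, with component counts I made up purely for illustration:

```python
from math import gcd

def lcm(a, b):
    """Least common multiple of two positive integers."""
    return a * b // gcd(a, b)

# Hypothetical component counts, purely for illustration
spokes, links, cogs = 32, 112, 14

# Clicks until the bent spoke, the damaged link, and the gear
# position all line up again
interval = lcm(lcm(spokes, links), cogs)

# The product is always a whole multiple of this interval,
# so counting in units of the product also lands on a failure point
product = spokes * links * cogs
```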

Naturally, as with any anecdote about someone famous, there are some alternate versions of this story.   My favorite is the one that changes the ending slightly:  once Turing figured out the formula for when the chain would jump off, he started carefully calculating the intervals as he rode the bike, and stopping to kick it at exactly the right times.   That’s a fun one, and certainly fits the stereotypes of Turing’s eccentricity.   But I do find it a bit hard to believe.   When riding a bike outdoors, there are lots of variables that can interrupt your concentration:  road obstacles, changing inclines, approaching cars, etc.   Could someone safely riding a bicycle successfully keep a running count of the wheel and chain rotations, over a continuous ride of several miles?   And in Turing’s case, it was further complicated by the fact that he always wore a gas mask as he rode, to prevent triggering his allergies.   But the alarm clock he was known to wear around his waist might have helped.

In honor of this story, there was actually a proposal back in 2015 to name a bicycle bridge in Cambridge, England after Turing.   I didn’t see any further references to this online, so it looks like it didn’t pass.   But there’s plenty of non-bicycle-related stuff named after him in the computer science world.    If you want someone to propose a bicycle bridge in your name, next time your bike breaks down, think about the clever mathematical tricks you might use to diagnose the issue.   Also, remember to found a branch of mathematics and win a world war.    Or forget about the bridge, and just take your bike to a competent repair shop.

And this has been your math mutation for today.


Wednesday, March 30, 2022

277: Bad Career Advice

Audio Link

Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would not have heard in school.   Recording from our headquarters in the suburbs of Wichita, Kansas, this is Erik Seligman, your host.  And now, on to the math.

As I mentioned in the last episode, I’ve recently reread Nassim Nicholas Taleb’s classic book “The Black Swan”, about the disproportionate role of unlikely extreme events, the Black Swans, in shaping our lives and our history.   Today I’d like to discuss another of the intriguing ideas from the book:  scalable and unscalable jobs, and which you should choose when starting out in your career.

Back when he was in college, Taleb received some advice from a business student:  choose a scalable, rather than a nonscalable, profession in order to become rich.   It’s a pretty simple concept:  a scalable job is one where you are paid for ideas, not hourly labor, and thus can affect many people with a small amount of work.   This contrasts with nonscalable jobs, where you are directly paid for the labor you perform.   Examples of scalable jobs include corporate CEO, derivatives trader, and author:  in any of these professions, a small amount of work can impact a massive number of people.   On the opposite end of the spectrum, you can think of cases like a dentist or a chef:  your services are inherently delivered one-on-one, and your output is essentially dependent on the time you spend.   Some like to refer to scalable professions as “idea” professions, and nonscalable ones as “labor” professions.

You can see why scalable work has the potential to earn much more money, since it’s a matter of simple math.   If your actions affect only one person or a small number of people, there is fundamentally less money to go around, since whatever you earn must ultimately come from the people you serve.   If your actions affect millions of people, then there is the capacity to draw in money from many directions; as Taleb phrases it, this can “add zeroes to your output… at little or no extra effort.”   An idea person does the same amount of work no matter how many people it affects.   The CEO gives his orders once, the derivatives trader presses the same button regardless of the quantity traded, and the author writes the book once.   Taleb glosses over the fact that some professions are somewhere in between:  for example, a chip design engineer at Intel impacts many millions of customers, though that impact is shared with about 100,000 co-workers.   That creates a pretty good income potential, though it is unlikely to make someone super-rich unless they climb high on the even-more-scalable management ladder.

So, was the advice correct, to choose a scalable rather than a nonscalable profession?   Well, it is true that if you go around looking at super-rich people, almost all of them got there through the scalable world.   But be careful:  if you reason this way, you are committing a fundamental logical fallacy, confusing “A implies B” with “B implies A”.   If someone is rich, they probably got that way through a scalable profession— but does that mean that if an arbitrary person chooses a scalable profession, they are likely to become rich?   The answer to that is a definite NO.   For every J.K. Rowling or Nassim Nicholas Taleb, there are thousands of aspiring authors who barely sell a copy of their book.   (I won’t comment on how the Math Mutation book fits into this discussion.)   The scalable professions are massively profitable for the small number of people who are successful, but what’s a lot less visible are the corresponding masses of unsuccessful aspirants to these careers who failed miserably.

Thus, Taleb points out that the advice he got was not very good, even though he happened to follow it and succeed himself.   In a nonscalable profession, the average worker makes a decent living— in some, like skilled tradesmen, engineers, or doctors, they can be pretty sure of heading towards the upper middle class if they do a good job, even though they are not likely to become rich.   Choosing a scalable profession is like entering a lottery, while nonscalable or mid-range jobs give you much better odds of earning a solid living.   And of course, there is usually the possibility of moving into management of whatever career path you’re on, if you decide later that you want to gamble on the scalable route, while having a solid profession to fall back on if you don’t win that lottery.

Another interesting point Taleb makes is that the nature of some professions changes over time.   If you look to the 19th century or earlier, being a singer was a nonscalable, labor-type position:  you had to be physically present before a relatively small audience, and repeat that activity every time you wanted to earn money for your music.     Thus many singers across the world could make a living producing music for eager audiences.   Then came the 20th-century revolution in recorded sound.   Once that happened, you could see superstars like Elvis, Pavarotti, or the Beatles become household names— and the majority of the money that in previous years would have been distributed among thousands of local performers ended up in their pockets.   For those who think this situation was unfair to the local singers, think about the effect of the printing press on monks, or of the invention of the alphabet on traveling storytellers.     The truth is, making a job more scalable almost always benefits huge numbers of people who consume the goods or services being produced, while creating an inconvenience for those who had previously profited off the nonscalability of their jobs.

Taleb also applies this concept at a national level.   The United States seems to have been much more economically successful in the past century than the intellectual European nations of, as Taleb puts it, “museumgoers and equation solvers”.   He attributes this to the US’s much greater tolerance for creative tinkering and trial-and-error, which results in the development of new concepts and ideas.   Because the economic benefits of concepts and ideas are scalable, this has resulted in a large multiplier on the money that can be made by US companies in general.   The much-lamented loss of US manufacturing jobs is just a reflection of this shift of focus.   Nike can design a shoe, or Boeing can design an airplane, with a relatively small number of ideas, and subcontract the grunt work to foreign companies.

I wish I could say that I pondered all these insights when initially setting out on my career, but like most of us, I just blundered my way around until I settled into something that seemed good.   It seems to have worked out pretty well for me.    But if you’re at an earlier stage of your life, Taleb’s ideas are worth strong consideration. 

And this has been your math mutation for today.


Sunday, February 20, 2022

276: Don't Believe the Math

Audio Link

Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would not have heard in school.   Recording from our headquarters in the suburbs of Wichita, Kansas, this is Erik Seligman, your host.  And now, on to the math.

The last episode’s discussion of randomness brought to mind the classic book “The Black Swan” by economist-philosopher Nassim Nicholas Taleb.    His books discuss the disproportionate role of unlikely extreme events, the Black Swans, in shaping our lives and our history.    Noticing online that there is a 2nd edition now, I decided to reread Taleb’s book, and got many intriguing new ideas for podcast episodes.   Today we will talk about the “Ludic Fallacy”, the incorrect use of mathematical models and games to predict real-life events.   To understand this better, let’s look at one of his key examples.

Suppose we do an experiment with two observers in a room, a professor and a gambler.   We present them the following mathematical puzzle:   I have a fair coin that I plan to flip 100 times, with everyone watching.   The first 99 flips are all heads.   The two observers are asked to estimate the probability that the next flip will turn up heads.    The professor confidently answers, “Since you said it’s a fair coin, previous flips have no influence on future flips.   So the chance is the same as always, exactly 50%.”   On the other hand, the gambler answers, with equal confidence, “If you got 99 heads, I’m almost certain that the coin is biased in some way, regardless of whether you said it’s fair.   So I’ll estimate a 99% chance that the next flip is heads.”    Naturally, in a purely mathematical sense, the professor was right, according to the information we provided.   But if this were a real-life situation, and you had to bet money on the outcome of the next flip, which answer would you go with?   The gambler probably has a point.
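
To put the gambler’s instinct on a formal footing, here’s a quick Bayesian sketch in Python.   The one-in-a-million prior for a two-headed coin is purely a number I made up for illustration:

```python
def posterior_biased(prior, p_heads_if_biased, n_heads):
    """P(coin is biased | n_heads heads in a row), where the
    only alternative hypothesis is a fair coin."""
    likelihood_biased = p_heads_if_biased ** n_heads
    likelihood_fair = 0.5 ** n_heads
    numerator = prior * likelihood_biased
    return numerator / (numerator + (1 - prior) * likelihood_fair)

# Even a one-in-a-million prior belief in a two-headed coin is
# overwhelmed by the evidence of 99 straight heads
p = posterior_biased(1e-6, 1.0, 99)
```

The posterior comes out vanishingly close to 1, vindicating the gambler’s 99% estimate as, if anything, conservative.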

And this is Taleb’s key insight that forms the Ludic Fallacy:  while abstract mathematical models may provide some insight into possibilities, you cannot consider them reliable models of real life.   Issues or events outside your simple model may have a huge effect.   Taleb criticizes the many professionals who spend their lives creating complex mathematical models, and who claim they deserve large salaries, or become media darlings, for using them to make intricate predictions about the future, predictions which then turn out to have little more accuracy than random chance.   Economists are some of the most notorious in this regard.   You may recall that back in the 1990s, a large hedge fund called Long Term Capital Management (or LTCM) was built around insights from supposedly genius economists who had won Nobel Prizes.   But when its “mathematically proven” strategy led to buying massive numbers of Russian bonds with borrowed money, which then defaulted, LTCM failed so badly that it needed a multi-billion dollar bailout to avoid crashing the world economy.

There are plenty of other examples like this, and it’s not just experts who fall for this kind of fallacy.   Taleb is a bit critical of modern software, such as Microsoft’s Excel spreadsheets, that makes it easy for even ordinary workers to mathematically extend existing data into future extrapolations, which are very rarely accurate in the face of unpredictable real-life events.   In effect, computers allow anyone to transform themselves into an incompetent economist with high self-confidence.

I think my favorite example that Taleb cites is the story of a casino he consulted with in Las Vegas.  They had very meticulously modeled all the ways that a gambler could cheat, or that low-probability events in the games might threaten their cash flow, and had invested massive amounts of money in gambling theory, security, high-tech surveillance, and insurance to guard against these events.   So what did the four largest money-losing incidents in their casino turn out to be?    

1. The irreplaceable loss of their headline performer when he was maimed by one of his trained tigers.

2. A disgruntled worker, who had been injured on the job, attempted to blow up the casino.

3. An incompetent employee had been putting some required IRS forms in a drawer and failing to send them in, resulting in massive fines.

4. The owner’s daughter was kidnapped, and he illegally took money from the casino in order to ransom her.

Now of course, it would have been very hard for any of these to be predicted by the models the casino was using.   That’s Taleb’s point:  no mathematical modeling could cover every conceivable low-probability event.

This is also an important reason why Taleb opposes centrally-planned economies.   One of the few Nobel-winning economists whom Taleb respects is F. A. Hayek, whose 1974 Nobel speech offered a harsh critique of his fellow economists who fall back on math due to their physics envy, and try to claim that their equations model the world just like the hard sciences.   No matter how many measurable elements they factor into their equations, the real world is much too complicated to model accurately and make exact predictions.   Modern free economies are largely successful because millions of individuals make small-scale decisions based on local information, and are free to take educated risks with occasional huge payoffs for society in general.   In his conclusion Hayek wrote, “The recognition of the insuperable limits to his knowledge ought indeed to teach the student of society a lesson of humility which should guard him against becoming an accomplice in men’s fatal striving to control society – a striving which makes him not only a tyrant over his fellows, but which may well make him the destroyer of a civilization which no brain has designed but which has grown from the free efforts of millions of individuals.”

We should mention, however, that Taleb and Hayek are not showing that mathematical models are totally useless— we just need to recognize their limitations.   They can be powerful in finding possibilities of what might happen, and opening our eyes to potential consequences of our basic assumptions.   For example, let’s look again at the coin flipping case.    Suppose instead of 99 heads, our example had shown a variety of results, including a run of 5 heads in a row somewhere in the middle.   The gambler might spot that and initially have a gut feeling that this is an indication of bias.   But the professor could then walk him through a calculation, based on the ideal fair coin, that if you flip a coin 100 times, there is over an 80% chance of seeing a run of length 5 at some point.   So using the insight from his modeling, the gambler can determine that this run is not evidence of bias, and make a more educated guess, considering that the initial promise of a fair coin has not yet been proven false.    Remember, however, that the gambler still needs to consider the possibilities of external factors that are not covered by the modeling— maybe as he is making his final bet, that disgruntled employee will return to the casino with an angry tiger.
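
If you’re curious how the professor’s calculation works, here’s a short dynamic-programming sketch, which is one standard way to compute run probabilities (not necessarily how Taleb’s professor would do it):

```python
def prob_run_of_heads(n=100, run=5):
    """Probability of at least one run of `run` heads in n fair flips."""
    # state[k]: probability that no run has occurred yet and the
    # current streak of heads is exactly k, for k = 0 .. run-1
    state = [0.0] * run
    state[0] = 1.0
    for _ in range(n):
        new = [0.0] * run
        new[0] = sum(state) * 0.5          # tails resets the streak
        for k in range(run - 1):
            new[k + 1] = state[k] * 0.5    # heads extends the streak
        # a streak of run-1 followed by heads completes a run, and that
        # probability mass simply leaves the "no run yet" states
        state = new
    return 1.0 - sum(state)
```

For n = 100 and a run of 5, the result is about 0.81, matching the professor’s figure.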

So, in short, you can continue to use mathematical models to gain limited insight, but they are not reliable sources of practical predictions.   Don’t get overconfident and fool yourself into making big bets on the assumption that your model has identified all real-life risks.

And this has been your math mutation for today.


Saturday, January 22, 2022

275: Demanding Fair Dice

Audio Link

Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would not have heard in school.   Recording from our headquarters in the suburbs of Wichita, Kansas, this is Erik Seligman, your host.  And now, on to the math.

Recently I was amused to hear about a strange machine called the “dice-o-matic”, a monstrous 7-foot-tall machine that can physically roll thousands of dice per day.   It uses a camera to automatically capture the results and serve them up on the internet, for players of online board and role-playing games who need trustworthy rolls of virtual dice.   It was created by the hosts of the GamesByEmail website, in response to demands by players for real dice rolls to apply to their games.   At first I thought this was rather silly— after all, nearly every programming language these days offers an option to generate a random number, so why would someone need physical dice rolls?   But as I read more about it, I could sort of understand the motivation.

To start with, let’s review how most random numbers are generated on a computer.   Typically these are not truly random numbers, but pseudo-random:  each number is derived from the previous one, but in such a way that the sequence would effectively appear random to an external observer not aware of the algorithm used.   For a given “seed”, or starting number, the generator will always produce the same sequence.   The most common algorithm for such numbers is the linear congruential generator.   For example, suppose you are trying to generate a random number between 0 and 7.   A simple linear congruential generator might take the current number, multiply it by 5, add 1, and then take the remainder modulo 8.   If we start with the number 0, we get the sequence 0,1,6,7,4,5,2,3, a pretty random-looking sequence of 8 numbers.   That sequence would then repeat forever— this type of generator is naturally cyclic.   But in real life, the numbers used for the multiplication, addition, and modulo operations would be much larger, so the cyclic nature would almost never be exposed.   By the way, the deterministic nature of these pseudo-random numbers can actually be beneficial in many applications:  for example, if testing a microprocessor design with random simulations, knowing the seed that led to discovering a malfunction enables you to easily repeat the exact same random test.
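
Here’s that toy generator in a few lines of Python, just to make the arithmetic concrete:

```python
from itertools import islice

def lcg(seed, a=5, c=1, m=8):
    """A toy linear congruential generator: x -> (a*x + c) mod m."""
    x = seed
    while True:
        yield x
        x = (a * x + c) % m

# The first nine values starting from seed 0: the full cycle of
# eight numbers, and then the sequence begins repeating
sequence = list(islice(lcg(0), 9))
# sequence == [0, 1, 6, 7, 4, 5, 2, 3, 0]
```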

Usually you would try to get a number from a random physical source to set the seed, so you would not get the same sequence of numbers every time.   On Unix systems, the /dev/random file collects environmental noise, such as device driver delays, into an “entropy pool” of random starting seeds.   There are other even better methods:  quantum physics supplies true randomness, through the timing of radioactive decays, for example.   And there are silly-sounding methods that can be quite effective, such as online programs that generate random values from patterns observed in lava lamps.   Various online services provide true random numbers generated through other methods.   Of course, if you can generate these seeds with true randomness, you might argue that you should generate all your random numbers this way instead of using a pseudo-random algorithm.   But remember that current microprocessors run blazingly fast, with billions of operations per second.   Anything requiring input from the external world is going to create extra delays that are huge compared to internal operations.   So it’s much more common to use these true random sources just for the seed.
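
As an example of this pattern, here’s a Python sketch that draws a seed once from the operating system’s entropy pool (os.urandom taps the same kind of source as /dev/random), then hands everything else to the fast pseudo-random generator:

```python
import os
import random

# One slow, truly-random draw from the OS entropy pool...
seed = int.from_bytes(os.urandom(8), "big")

# ...then billions of fast pseudo-random draws can follow from it
rng = random.Random(seed)
roll = rng.randint(1, 6)
```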

Which brings us back to the motivation for the dice-o-matic.   According to the description on their website, users were dissatisfied with the randomness of the earlier pseudo-random method.   They write:  “Some players have put more effort into statistical analysis of the rolls than they put into their doctoral dissertation.”   Personally, I’m a bit skeptical—  in any set of random numbers, you are going to see streaks of values that appear non-random.   There may be a run of 20 low numbers in a row, or an ascending 1-2-3-4-5-6 in a few places, etc.   I bet most of the complaining users had lost a game or had their D&D characters killed due to bad rolls that appeared in such a sequence, and were looking for external causes to blame.   On the other hand, with enough effort, it is often possible to demonstrate the non-randomness of pseudo-random numbers.   The website points out some applications, such as scientific simulation of virus infections, where this lack of randomness has had detectable effects.   In any case, the GamesByEmail people took the issue seriously, and the dice-o-matic was their solution, freeing them from the dangers of pseudo-randomness.
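
For what it’s worth, a basic version of the kind of statistical analysis those players were attempting is a chi-square test of the face counts against a uniform die.   Here’s a rough sketch, using Python’s built-in generator as the roll source and a seed I picked arbitrarily:

```python
import random

def chi_square_d6(rolls):
    """Pearson chi-square statistic for six-sided die rolls
    against the uniform expectation of n/6 per face."""
    expected = len(rolls) / 6
    counts = [rolls.count(face) for face in range(1, 7)]
    return sum((c - expected) ** 2 / expected for c in counts)

rng = random.Random(12345)
rolls = [rng.randint(1, 6) for _ in range(60_000)]
stat = chi_square_d6(rolls)
# With 5 degrees of freedom, a statistic far above ~11 starts to
# look suspicious; a good generator typically lands well below that
```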

We should also keep in mind, however, that dice actually aren’t the greatest sources of true random numbers either:  wear and tear will cause them to gradually depart from true randomness over time, and there can be other biases from the manufacturing process, like the different amounts of material cut out to make the pips on each side.   Personally, I think the GamesByEmail people might have been better off installing a few number-generating lava lamps.

And this has been your math mutation for today.


Tuesday, November 30, 2021

274: The Gomboc

Audio Link

Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would not have heard in school.   Recording from our headquarters in the suburbs of Wichita, Kansas, this is Erik Seligman, your host.  And now, on to the math.

If you heard the title of this podcast, you’re probably asking yourself, “What did he say?”   Today we are discussing a 3-dimensional shape that was first discovered only in the 21st century, by a pair of Hungarians named Gábor Domokos and Péter Várkonyi.   It’s called the Gomboc, spelled g-o-m-b-o-c, with two dots over each of the o’s due to its Hungarian origin.   It looks kind of like a spherical stress ball whose top has been partially squeezed in; we can’t really do it justice in a verbal description, but you can find links to articles with pictures in the show notes.   But most importantly, this new shape has the amazing property that it has only one stable equilibrium position:  no matter what position you put it down in, it will roll around and right itself.

Now at first, you might not think this is very interesting.   When you were a child, you very likely played with a toy like that, a small egg-shaped doll, which would always pull itself upright no matter how you placed it.     These are called roly-poly toys in some places, but in the US during my childhood, they were marketed as “Weebles”, with the advertising slogan that “Weebles wobble but they don’t fall down”.   They worked because of a weighted bottom:  the bottom half of the toy weighed significantly more than the top.    The Gomboc, on the other hand, is made of a single material with uniform density, and miraculously still has this self-righting property.

Another interesting fact about the Gomboc is that, until a few decades ago, it was not obvious that such a shape should exist at all.   If you look at the 2-dimensional analog of the problem, trying to find a two-dimensional shape of uniform density that will always right itself when placed on a line that has a gravitational pull, there is no solution.   Any convex shape you create will have at least two stable equilibrium positions.   Domokos had originally set out to try to prove the 3-dimensional analog of this theorem, which would have demonstrated that nothing like the Gomboc could actually exist.   But after a conversation with a Russian mathematician named Vladimir Arnold, he realized that things were a bit different in three dimensions, and the theorem might not hold.

Domokos and Varkonyi then started working with computer modeling and trying to figure out what such a self-righting shape would look like.   At one point, Domokos came up with the idea that maybe nature had solved the problem already, and started experimenting with pebbles found at a beach, to see if any had naturally assumed a self-righting shape.   After checking over 2000 pebbles, he was disappointed.   Their breakthrough came when they started defining some key parameters of the shapes they were creating, called “flatness” and “thinness”, and realized they would have to minimize both together to come up with their desired shape.   

Continuing their computer modeling, but with these parameters in mind, they were finally able to start describing 3-D self-righting shapes.   The first one they came up with was very close to a sphere— disappointingly, so close that they could not manufacture one in practice.    Because it only differed from a true sphere by a factor of 10^-5, and even microscopic variations from their intended design would kill its self-righting property, attempts to manufacture it would just lead to the creation of ordinary spheres.   But by realizing that they could sacrifice smoothness, and allow some sharp edges between sphere-like segments, they were then able to come up with a more practical shape.  It’s still incredibly challenging to manufacture:   to get one you can hold in your hand, it has to be made with a precision down to the width of a human hair.    Apparently interest is now wide enough that they are being mass-produced, and can be ordered at the “gomboc shop” website.

Another interesting thing about this shape— Domokos realized afterwards that his insight about nature developing in this direction wasn’t totally off base.   He couldn’t find it in pebbles, because the tight margin for error meant that any Gomboc pebble would quickly wear away at one of its edges into a non-self-righting shape.   But evolution painstakingly comes up with precise designs over millions of years— and after carefully searching species of tortoises, he did find two with shell designs very close to his Gomboc.   A tortoise stuck on its back is very vulnerable, so it really does make sense that they would evolve an improved ability to right themselves as needed.

And this has been your math mutation for today.


Sunday, October 31, 2021

273: A Maze Of Labyrinths

 Audio Link

Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would not have heard in school.   Recording from our headquarters in the suburbs of Wichita, Kansas, this is Erik Seligman, your host.  And now, on to the math.

Mazes, or labyrinths, are probably the one type of somewhat mathematical game that nearly everyone is familiar with.   Books of activities for young children, at least in the US, often contain simple mazes, drawings with many different twisty passages leading from one side of the page to the other, with a challenge to trace the right path.   And when I was first learning to program computers while growing up, one of my favorite types of simple programs to write and experiment with was ones that generate and draw mazes and labyrinths.   As you are likely aware, these have been around in both literary and visual form since ancient times.   
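Maze-generating programs like the ones I mentioned are still fun to sketch.   Here’s a minimal Python version of one common method, the depth-first “recursive backtracker” (one standard algorithm, not necessarily what I wrote as a kid), which carves a “perfect” maze, one with exactly one path between any two cells:

```python
import random

def generate_maze(w, h, seed=0):
    """Carve a 'perfect' maze (a spanning tree of the w-by-h grid of
    cells) with the depth-first 'recursive backtracker'.  Returns the
    carved passages as frozensets of two neighboring cells."""
    rng = random.Random(seed)
    visited = {(0, 0)}
    passages = set()
    stack = [(0, 0)]
    while stack:
        x, y = stack[-1]
        # unvisited neighbors of the current cell
        neighbors = [(x + dx, y + dy)
                     for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= x + dx < w and 0 <= y + dy < h
                     and (x + dx, y + dy) not in visited]
        if neighbors:
            nxt = rng.choice(neighbors)   # tunnel to a random neighbor
            visited.add(nxt)
            passages.add(frozenset({(x, y), nxt}))
            stack.append(nxt)
        else:
            stack.pop()                   # dead end: back up
    return passages

maze = generate_maze(8, 8)
# A perfect maze on n cells is a spanning tree: exactly n-1 passages.
print(len(maze))  # 63
```

Since every cell is reachable and there are no loops, such a maze always has one and only one route between entrance and exit, which connects nicely to the unicursal/multicursal distinction below.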

Recently I was reading a book on this topic by a scholar named Penelope Doob, and she pointed out an intriguing mystery related to these constructs.   There are two major types of labyrinths:  unicursal ones, where there is ultimately only one path (though a very twisty one) that will eventually lead anyone entering to the goal; and multicursal ones, where you have to make choices along the way among many paths, most of which lead to dead ends.   The word “labyrinth” can refer to both, though these days “maze” usually refers to the multicursal type.   But here’s the strange thing:   in ancient and medieval sources, nearly every visual or physically built maze is unicursal or single-path, while nearly every literary description of labyrinths describes a multicursal or multiple-path one.   Why would we have this strange division?   Why weren’t people drawing and building the same labyrinths they described in their stories?

Let’s start by looking in more detail at unicursal labyrinths, the ones with only one path from beginning to end.   This one path is guaranteed to get anyone entering to their goal eventually, but it will take a lot longer than one might guess from the size of the labyrinth.    This is often seen as a spiritual metaphor:   the path to true faith or Enlightenment may be confusing and long, but as long as you remain steady and stick to the path, following the prescribed moral choices that move you forward, you will reach your goal.   Similarly, when laid into the ground or in a garden, it can be seen as a meditation aid, providing a simple path to slowly follow which doesn’t require any choices or decisions, and helping to free your mind of conscious thought.     Actually, that’s not fully true:  you do have the initial choice to enter the labyrinth or not, as well as the choice to turn back at any time.    But this also can work well with the metaphor of following the steady path to God:   you make the critical choice of whether to embark on the path, and at any moment along the way, you either maintain your faith or turn away.     As mentioned above, nearly all visually depicted labyrinths in ancient and medieval times are unicursal, with only a handful of exceptions.   Doob’s book shows some interesting examples dating back to stone carvings from between 1800 and 1400 BC.

The multicursal type of labyrinth, where you have many choices to make along the way and can end up in a dead end or trap, is the kind most suited to games and puzzles.    If you’re a modern gamer, I’m sure you've wandered around an endless set of 3-D mazes in various videogames or in tabletop games like Dungeons & Dragons.   The literary precedents for this type of labyrinth go back to ancient times as well— who can forget the Greek legend of the Minotaur, who lived at the center of a confusing and dangerous labyrinth?   Theseus only managed to make it out alive because the princess Ariadne gave him some thread which he could slowly unwind to mark his path.   These labyrinths can also provide a slightly different metaphor for the path to religious enlightenment:  the constant presence of temptations to sin, which will lead you away from your goal, possibly forever.   Whether you succeed and make it to the goal or spend the rest of your life wandering in confusion, it’s due to your own choices.
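By the way, Ariadne’s thread is essentially what computer scientists call depth-first search with backtracking:  pay out the thread as you advance, and rewind it whenever you hit a dead end.   Here’s a small Python sketch of the idea (the maze grid here is just a made-up example for illustration):

```python
def thread_of_ariadne(maze, start, goal):
    """Solve a multicursal maze by depth-first search.  The explicit
    path list plays the role of Ariadne's thread: extend it as we
    advance, rewind it when we reach a dead end."""
    rows, cols = len(maze), len(maze[0])
    path, seen = [start], {start}
    while path:
        x, y = path[-1]
        if (x, y) == goal:
            return path
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if (0 <= nx < rows and 0 <= ny < cols
                    and maze[nx][ny] == '.' and nxt not in seen):
                seen.add(nxt)
                path.append(nxt)   # unwind more thread
                break
        else:
            path.pop()             # dead end: rewind the thread
    return None                    # no route exists

# '.' is open floor, '#' is wall; a tiny maze with some dead ends.
maze = ["..#.",
        ".#..",
        "...#",
        "#..."]
print(thread_of_ariadne(maze, (0, 0), (3, 3)))
```

The thread guarantees Theseus never retraces a corridor twice and can always find his way back out, which is exactly the invariant that makes depth-first search terminate.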

But we should not forget that there are a number of elements that both types of labyrinths have in common.   When you look at either a unicursal or multicursal maze from above, each presents a visually busy picture that’s hard to fully comprehend at a glance; in most cases, you can’t even be sure a labyrinth is of one type or the other without starting to carefully trace the path.  Both seem to trigger the same set of concepts in our brain, at least at first glance.  Also in both cases, because the maze twists back and forth many times within a small confined area, it’s very hard to know how far along the path you are:  while the unicursal maze might not provide false paths, it still conceals the total distance to the goal.   So they both have the same effects in attacking your self-confidence and triggering confusion.   These commonalities are strong enough that the ancients generally used the same language to describe both, variants of the words that led to our modern “labyrinth”.

Ultimately, these commonalities provide a key to understanding the strange issue of visually depicted unicursal mazes and literary multicursal ones.   As Doob describes it, “the best solution that can be found to the mystery is that classical and medieval eyes saw insufficient difference in the implications of the two models to warrant a new design.”   Unicursal mazes are somewhat simpler to draw or illustrate, since you don’t have to worry about accidentally drawing a maze with no valid path to the goal.   And it was very common for most art to be based somewhat on earlier art, so once drawings of unicursal labyrinths became common, those who needed to draw something similar followed the patterns set by their predecessors.    When artists were creating illustrations of labyrinths, they didn’t consult the corresponding literary works to check for an exact match.   On the other hand, one cannot deny that when telling stories, multicursal mazes are much more terrifying, providing the possibility of getting lost or confused forever without reaching your goal, so it makes a lot of sense that these would feature prominently in myths and legends.   

And this has been your math mutation for today.