Friday, July 30, 2021

271: Too Much Testing

 Audio Link


Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would not have heard in school.   Recording from our new headquarters in Wichita, Kansas, this is Erik Seligman, your host.


Recently the infamous Elizabeth Holmes of Theranos has been in the news again, apparently filing new motions to delay her trial.   As you may recall, Theranos was the company that claimed to have developed powerful blood testing kits, which could run hundreds of standard medical tests at home on a single drop of blood.   It turned out that the invention just didn't work, and Holmes was eventually charged with fraud as the company collapsed.   But not enough people have noticed that the lies about the science-fiction technology weren't Theranos's only problem: there are also flaws in their fundamental premise of "democratizing" your health information.   This is the idea that average consumers should be encouraged to run lots of tests on their own blood, for rare diseases or issues.   We especially need to pay attention now that numerous non-fraudulent companies, like the well-intentioned Everlywell, have entered this space.


At first glance, the core concept sounds like an unmitigated benefit.    Why not let everyone run their own blood tests, without worrying about expensive doctors?    And there are good philosophical arguments why this should be allowed, as a matter of individual freedom, regardless of the mathematical issues I’m about to discuss.    (I won’t be getting into those arguments, as that’s beyond the scope of this podcast!)   But there is a key element of the math behind these tests that too many consumers are likely to overlook or be unaware of:   the fact that if a highly accurate test shows positive for an extremely rare disease, you probably DON’T actually suffer from that disease.

 

To make this more concrete, let’s assume there is a blood test which can, with 99% accuracy, determine if you suffer from the deadly virus of Math Madness, or MM; and in the general population, only one person out of every million has this disease.   You run the test, and it shows up positive.   You might intuitively think you are 99% likely to have MM.    However, let’s think about the total numbers here.   Out of every million people tested, only one has MM, due to its frequency in the population.   Yet with a 99% accurate test, 1% of the approximately 1 million healthy people, or 10,000 people, are going to incorrectly test positive.   So a given person who tests positive only has about a 1 in 10000 chance of carrying the disease.


How did our basic intuition fail us here?   The key problem is that the conditional probability of A given B is quite different from the probability of B given A.   That 99% represents the probability of a positive test given that we have the disease, and the probability of a negative test given that we don't have the disease.   But it doesn't measure the chance we have the disease given a positive test, the converse of what that 99% is about.   When we reverse the terms like that, we need to convert the probability using Bayes' Theorem:

P(A|B) = P(B|A)P(A)/P(B)

That P(A) term, or the prior probability of the condition being tested, is the key factor here that drastically cuts down the ultimate chance of having the disease.   For our MM example, that gives us .99 * (1/1000000)/(1/100), or approximately 1/10000.
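If you'd like to check the arithmetic yourself, here is a minimal Python sketch (my own illustration, not anything an actual testing company runs) that plugs the Math Madness numbers into Bayes' Theorem:

```python
# Bayes' Theorem check for the hypothetical Math Madness (MM) test.
# Assumed numbers from above: 1-in-a-million prevalence, and a test that
# is 99% accurate in both directions (sensitivity and specificity).

prevalence = 1 / 1_000_000    # P(disease)
sensitivity = 0.99            # P(positive | disease)
specificity = 0.99            # P(negative | no disease)

# P(positive test) = true positives + false positives
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# Bayes' Theorem: P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"P(positive) is about {p_positive:.4f}")                     # ~0.01, i.e. 1 in 100
print(f"P(MM | positive) is about {p_disease_given_positive:.6f}")  # ~0.0001, i.e. 1 in 10,000
```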


Now you might point out that a false positive test is OK, as this is just an initial check to see if we should consult the doctor for more accurate testing and followup.   But the problem is that once the "easy" tests are out of the way, the next steps are often much more intrusive, stressful, and life-altering.    This was brought home to me by an interesting poster I saw at my doctor's office, provided by the US Preventive Services Task Force, on whether people under 70 should get PSA tests for prostate cancer.    Prostate cancer is an interesting case because, while deadly in the worst cases like all cancers, mild versions of it often do little harm and can be ignored.   The poster points out that out of every 1000 men given the PSA test, 1 death from prostate cancer will be prevented.   But:   240 of those men will initially test positive, and have to go through a painful biopsy.   Then 80 of them will, after testing positive at biopsy, go through long, painful (and unnecessary) courses of surgery or radiation treatment, after which 50 will permanently suffer erectile dysfunction, and 15 will suffer permanent urinary incontinence.    So we're 65 times more likely to suffer really painful lifetime consequences than to have our life saved by taking the test.   It might still be worth it, but you really have to think hard.
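If you want to double-check that ratio, here is a trivial Python tally of the poster's numbers as I've quoted them:

```python
# Outcomes per 1000 men given the PSA test, per the USPSTF poster.
deaths_prevented = 1
biopsies = 240               # initial positives who undergo a painful biopsy
treated = 80                 # proceed to surgery or radiation after biopsy
erectile_dysfunction = 50    # permanent, among those treated
incontinence = 15            # permanent, among those treated

lifetime_harms = erectile_dysfunction + incontinence
print(f"{lifetime_harms} permanently harmed for every {deaths_prevented} "
      f"life saved: {lifetime_harms // deaths_prevented}x")   # 65x
```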


Thus, ultimately, it often makes the most sense to avoid medical testing for rare conditions unless there is some overt symptom that causes your doctor to suspect an issue.    Otherwise the followup resulting from the test can itself lead to many kinds of very negative patient outcomes.   This all falls out of the common fallacy of failing to apply Bayes' Theorem, which requires that you factor in the prior probability of an event before you can properly interpret a test's results.    Can average consumers be expected to understand these issues, and the reasons why running every possible test on your blood might not be the wisest course of action?   At the very least, I think companies entering this space should be very clear about the issue, and put up posters like the one at my doctor's office, so their customers will approach the topic with their eyes open.


And this has been your math mutation for today.  




References:  


https://www.uspreventiveservicestaskforce.org/Home/GetFileByID/3795 

https://en.wikipedia.org/wiki/Bayes%27_theorem

https://www.inc.com/christine-lagorio-chafkin/everlywell-democratizing-health-information.html 


Tuesday, June 15, 2021

270: Which Way To Turn

 Audio Link


Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would not have heard in school.   Recording from our new headquarters in Wichita, Kansas, this is Erik Seligman, your host.


Apologies for the long gap since the last episode— as you just heard, we have been relocating Math Mutation headquarters across the country.   You can probably guess that this involves lots of details to handle.   We should soon be back on our almost-regular monthly schedule.


Anyway, with all the moving stuff going on, I recalled a topic I had been considering a while back.   It's a pretty simple geometry question, yet one with a major (if subtle) effect on all our lives:  why are screws generally right-handed?    If you've ever had to screw something together, you probably remember the saying "lefty loosey, righty tighty", which reflects the common design of these basic and universal tools.   The screw is typically considered one of the six classical "simple machines", along with the inclined plane, lever, pulley, wedge, and wheel and axle.   So why are they always designed to rotate in one direction?


With a quick web search, the answer seems almost unanimous: this convention simply derives from the typical handedness of humans.   If you're right-handed, then a right-handed screw is easiest to screw in.    It's just as easy to manufacture screws in either direction, but when they first became standardized (with the Whitworth design in 1841), a single design became ubiquitous.   I guess my poor left-handed daughter will forever be a victim of this society-wide conspiracy.


But the more surprising fact I discovered when researching this topic is that left-handed screw threads do exist, and are used for a variety of specialized applications.    Perhaps the most obvious is for situations where the natural rotation of an object would tend to loosen a right-handed screw:  for example, the left-side pedals on a bicycle, certain lug nuts on the left side of cars, or connections that secure other machine parts that are rotating the wrong way.      There are also cases where it’s useful to couple a left-handed and a right-handed connection, in a pipe fitting for example, so rotation in a single direction helps connect at both ends.    A less obvious usage is for safety:   in many applications involving flammable gas lines, the connections for the gas line use left-handed rather than right-handed threads, so nobody connects the wrong pipe by accident.   


On the other hand, a few of the uses described just seemed kind of silly, though I suppose they were for valid reasons.    In the early 20th century, many lamps used in subways were specially designed to have bulbs that screw in with left-handed threads, so nobody could steal them and use them at home.   It’s probably a sign of our society’s growing prosperity over the last century that most people don’t steal public light bulbs anymore.    More bafflingly, I found a few references to the use of left-hand threads in early ballpoint pens, to provide a “secret method” of disassembly.   I guess business meetings were a lot less boring back then— given the amount of idle fiddling I’ve typically done with my pens on a normal day at work, I can’t imagine that secret lasting very long.   


And this has been your math mutation for today.  




References:  

Sunday, April 25, 2021

269: A Good Use of Downtime

 Audio Link

The recent observation of Holocaust Remembrance Day reminded me of one of the stranger stories to come out of World War II.    John Kerrich was a South African mathematician who made the ill-fated decision to visit some family members in Denmark in 1940, just before the Germans invaded, and soon found himself interned in a prison camp.    We should point out that he was one of the luckier ones: the Germans allowed the Danes to run their prison camps locally, so he lived in extremely humane conditions compared to the majority of prisoners in German-occupied territories.   But being imprisoned still left him with many hours of time to fill over the course of the war.     Kerrich decided to fill this time by doing some experiments to demonstrate the laws of probability.


Kerrich's main experiment was a very simple one:   he and a fellow prisoner, Eric Christensen, flipped a coin ten thousand times and recorded the results.    Now you might scratch your head in confusion when first hearing this— why would someone bother with such an experiment, when it's so easy for anyone to do at home?    We need to keep in mind that back in 1940, the idea that everyone would have a computer at home (or, as we now do, in their pocket) that they could use for seemingly endless numbers of simulated coin flips would have seemed like a crazy sci-fi fantasy.    Back then, most people had to manually flip a physical coin or roll a die to generate a random number, a very tedious process.      Technically there were some advanced computers under development at the time that could have done the simulation if programmed, but these were being run under highly classified conditions by major government entities.   So recording the value of ten thousand coin flips actually did seem like a useful contribution to math and science at the time.


So, what did Kerrich accomplish with his coin flips?    The main purpose was to demonstrate the Law of Large Numbers.  This is the theorem that says that if you perform an experiment a large number of times, the average result will asymptotically approach the expected value.    In other words, if you have a coin that has 50-50 odds of coming up heads or tails, then over many trials you will get closer and closer to 50% heads and 50% tails.   Kerrich's flips yielded precisely 5,067 heads, and over the course of the experiment the ratio got closer and closer to 50-50, thus providing reasonable evidence for the Law.    (In any 10000 flips, there is about an 18% chance of being off by at least this amount from the precise 50-50 ratio, so this result is reasonable for a single trial.)
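For listeners who would rather not spend a war flipping coins, here's a short Python sketch (my illustration, not Kerrich's method) that simulates the experiment, and also checks that 18% figure using the usual normal approximation:

```python
import random
from math import erf, sqrt

# Simulate Kerrich's 10,000 flips and watch the running fraction of
# heads drift toward 0.5 (the Law of Large Numbers in action).
random.seed(1940)
heads = 0
for i in range(1, 10_001):
    heads += random.randint(0, 1)
    if i in (10, 100, 1_000, 10_000):
        print(f"after {i:>6} flips: {heads / i:.4f} heads")

# How surprising is Kerrich's 5,067?  The count of heads is approximately
# normal with mean 5000 and standard deviation sqrt(10000 * 0.25) = 50,
# so 5067 is 1.34 standard deviations out.
z = (5067 - 5000) / 50
p_at_least_this_far = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))   # two-tailed
print(f"P(at least 67 away from 5000) is about {p_at_least_this_far:.2f}")  # ~0.18
```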


Of course, it might make sense to request another trial of 10000 flips to confirm, for improved confidence in the result.   But apparently even in prison you don't get bored enough for that— in his book, Kerrich wrote, "A way of answering the… question would be for the original experimenter to obtain a second sequence of 10000 spins.   Now it takes a long time to spin a coin 10000 times, and the man who did it objects strenuously to having to take the trouble of preparing further sets."


Kerrich and Christensen also did some other experiments along similar lines.    By constructing a fake coin from wood and lead, they created a biased coin to flip, and over the course of many flips demonstrated 70/30 odds for the two sides.   This experiment was probably less interesting because, unlike with a standard coin, there likely wasn't a good way to estimate its expected probabilities before the flips.   A more interesting experiment was the demonstration of Bayes' Theorem using colored ping-pong balls in a box.   This theorem, as you may recall, helps us calculate the probability of an event when we have some knowledge of prior conditions that affect the likelihood of each outcome.    The simple coin flip experiment seems to be the one that has resonated the most with reporters on the Internet, though, perhaps because it's the easiest to understand for anyone without much math background.


In 1946, after the end of the war, Kerrich published his book, "An Experimental Introduction to the Theory of Probability".    Again, while it may seem silly these days to worry about publishing experimental confirmation of something so easy to simulate, and which has been theoretically proven on paper with very high confidence anyway, this really did seem like a useful contribution in the days before widespread computers.    The book seems intended for college math students seeking an introduction to probability, and in it Kerrich goes over many basics of the field as demonstrated by his simple experiments.    If you're curious about the details and the graphs of Kerrich's results, you can read the book online at openlibrary.org, or click the link in our show notes at mathmutation.com.   Overall, we have to give credit to Kerrich for managing to do something mathematically useful during his World War II imprisonment.


And this has been your math mutation for today.




References:  


Saturday, March 20, 2021

268: The Right Way to Gamble

Audio Link

At some point, you've probably heard an urban legend like this:   someone walks into a Las Vegas casino with his life savings, converts it into chips, and bets it all on one spin of the roulette wheel.    In the version where he wins, he walks out very wealthy.   But, in the more likely scenario where he loses, he leaves totally ruined.    This isn't just an urban legend, by the way; it has actually happened numerous times.   For example, in 2004, someone named Ashley Revell did this with $135,000, and ended up doubling his money in one spin at roulette.    I most recently read about this incident in a book called "Chancing It:  The Laws of Chance and How They Can Work for You", by Robert Matthews.   And Matthews makes the intriguing point that if you are desperate to significantly increase your wealth ASAP, and want to maximize the chance of this happening, what Revell did might not be entirely irrational.


Now, before we get into the details, I want to make it clear that I'm not recommending casinos or condoning gambling.    Occasionally while on vacation my wife and I will visit a casino, and here's my foolproof winning strategy.    First you walk in, and let yourself take in the dazzling atmosphere of the flashing lights, sounds, and excitement.    Then walk over to the bar, plop down ten bucks or so, and buy yourself a tasty drink.   Sit down at the counter, take out your smartphone, start up the Kindle app, and read a good book.   (The Matthews book might be a nice choice, linked in the show notes at mathmutation.com.)    Then relax in the comfortable knowledge that you're ahead of the casino by one pina colada, which you probably value more than the ten dollars at that particular time.   Don't waste time attempting any of the actual casino games, whose odds are always fundamentally designed to make you lose your money.


Anyway, getting back to the Revell story, let's think for a minute about those rigged odds in a casino.   Basically, the expected value of your winnings (the sum, over all outcomes, of each outcome's probability times its payoff) is always negative.   So, for example, let's look at betting red or black in roulette.  This might seem like a low-risk bet, since there are two colors, and the payout is 1:1.   When you look closely at the wheel, though, you'll see that in addition to the 36 red or black numbers, there are 2 others, a green 0 and 00.   Thus, if you bet on red, your chances of winning aren't 18/36, but 18/38, or about 47.37%.   That's the sneaky way the casinos get their edge in this case.   As a result, your expected winnings for each dollar you bet are around negative 5.26 cents.  This means that if you play a large number of games, you will probably suffer a net loss of a little over 5% of your money.
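Here's a quick Python check of those numbers:

```python
# American roulette: bet $1 on red.  18 red pockets pay 1:1; the other
# 18 black pockets plus the green 0 and 00 take your dollar.
p_win = 18 / 38
expected_per_dollar = p_win * 1 + (1 - p_win) * (-1)

print(f"chance of winning:     {p_win:.4f}")                 # ~0.4737
print(f"expected value per $1: {expected_per_dollar:+.4f}")  # ~-0.0526, i.e. -5.26 cents
```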


So let’s assume that Revell desperately needed to double his money in one day— perhaps his stockbroker had told him that if he didn’t have $200,000 by midnight, he would lose the chance to invest in the Math Mutation IPO, and he couldn’t bear the thought of missing out on such a cultural milestone.   Would it make more sense for him to divide his money into small bets, say $1000, and play roulette 135 times, or gamble it all at once?   Well, we know that betting it all at once gave about a 47% chance of doubling it— pretty good, almost 50-50 odds, even though the casino still has its slight edge.   But if he had bet it slowly over 100+ games, then the chances would be very high that his overall net winnings would be close to the expected value— so he would expect to lose about 5% of his money, even if he put his winnings in a separate pocket rather than gambling them away.    In other words, in order to quickly double his money in a casino, Revell’s single bold bet really was the most rational way to do it.
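To put rough numbers on this, here's a sketch using the classic gambler's ruin formula.   It's a slightly different framing than the one above: I'm assuming Revell bets $1000 on red over and over until he either doubles his bankroll or goes broke, rather than stopping after exactly 135 bets.   But it makes the contrast even more dramatic:

```python
# Gambler's ruin at American roulette: starting with a units and betting
# 1 unit on red each spin (win probability p), the chance of reaching N
# units before going broke is (1 - (q/p)**a) / (1 - (q/p)**N), with q = 1 - p.
p = 18 / 38
q = 1 - p
a, N = 135, 270      # $135,000 bankroll and $270,000 goal, in $1000 units

ratio = q / p
p_double_gradually = (1 - ratio**a) / (1 - ratio**N)

print(f"one all-in bet on red: {p:.4f}")                    # ~0.4737
print(f"$1000 at a time:       {p_double_gradually:.1e}")   # ~7e-07, essentially hopeless
```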


And this has been your math mutation for today.




References:  

https://www.amazon.com/dp/B014RT1M1U/








Sunday, February 7, 2021

267: Free Will Isn't Free

 Audio Link

If you’ve browsed the web sometime in the last three decades, especially if you visit any New Age or philosophy websites, you’ve probably come across the argument that quantum mechanics proves the existence of free will.   Superficially, this seems somewhat plausible, in that quantum mechanics blows away our traditional idea that we can fully understand the future behavior of the universe based on observable properties of its particles and forces.   Free will seems like a nice way to fill in the gap.   But when you think about it in slightly more detail, there seems to be a fatal flaw in this argument.   So let’s take a closer look.


To start with, what is so special about quantum mechanics?   This is the area of physics, developed in the 20th century, that tries to explain the behaviors we observe in subatomic particles.   What's bizarre about it is that according to its calculations, it is as if, in Einstein's famously disapproving words, God is playing dice with the universe:  we can calculate probabilities of the position and momentum of particles, but not the exact values until we observe them.     For example, suppose I am throwing a toy mouse for my cat to catch.  Since this is a macroscopic object, classical physics works fine:  my cat can take out a calculator, and based on the force I throw with, the mass of the mouse, and the effects of gravity, he will be able to figure out exactly where to pounce.     But now suppose I am throwing a photon for him to catch.   Even if he knows everything that is theoretically knowable about my throw, he will not be able to calculate exactly where it will land— he can only calculate a set of probabilities, and then figure out where the photon went by observing it.    Of course, my cat uses the observation technique even with macroscopic mice, so no doubt he has read a few books on quantum physics and is trying to use the most generally applicable hunting method.


But how does this lead to free will?   The most common argument is that since things happen in the universe that cannot be precisely calculated from all the known properties of its particles and energies, there must be another factor that determines what is happening.   Many physicists believe that hidden physical factors, the so-called "hidden variables", have been largely ruled out.   That means the missing piece, according to the free will argument, is likely to be human consciousness.    When the activities of the particles in our brain could determine multiple possible outcomes, it is our consciousness that chooses which one will actually happen.    The quantum events in my brain, for instance, could lead with equal probability to me recording a podcast this afternoon, or playing video games.   My free will is needed to make the final choice.


Now time for the fundamental flaw in this argument.    Suppose the quantum activities in my brain do ultimately give me a 50% chance of recording a podcast this afternoon, and a 50% chance of playing videogames.   The implicit assumption in the free will argument is that if the properties of my brain determined completely that I would make a podcast this afternoon, then I would not have free will.   Since there are two alternatives and I have to choose one, there is free will.    However, remember that the quantum calculations provided an exact probability for each event, not just a vague uncertainty, and these probabilities have been well-confirmed in the lab.   If I’m required to roll a 6-sided die, and record the podcast if numbers 1-3 come up, or play videogames if 4-6 comes up, aren’t I just as constrained as if I were only allowed one of those options?   I still don’t get any freedom here.    Either way, it’s not a question of my consciousness, it’s just another calculation, though one whose result cannot be predicted.   If I’m forced to do certain things with known probabilities, that’s the opposite of free will.


There is one more wrinkle here though, which may rescue the free will argument.   Suppose we interpret the quantum calculation slightly differently.   Yes, there is a 50% chance of me recording the podcast, and 50% of playing a videogame.   But maybe this isn’t the universe rolling a die— maybe it’s a measure of the type of personality created by the sum total of quantum interactions in my brain.   So rather than being forced to roll a virtual die and decide, the universe has just built me into the kind of guy who, given a choice, would have a 50-50 chance of podcasting or gaming this afternoon.    All the quantum calculations about my brain activity are just figuring out the type of personality that has been created by its construction.   That argument rescues free will in the presence of quantum probabilities.   Though it does put us in the odd position of wanting to ascribe some kind of conscious will to subatomic particles observed in a physics lab, or to assume some hidden spirits in the room are making decisions for them.


So what's the answer here?    Well, I'm afraid that as often happens with these types of questions, we will not resolve it in a 5-minute podcast.    You can also find some more subtle arguments that incorporate other aspects of quantum physics, though those don't look too convincing to me either.   If humanity truly doesn't have free will, I'll look forward to the day when someone writes a treatise on how a Big Bang's worth of hydrogen atoms fundamentally leads to Math Mutation podcasts a few billion years later.


And this has been your math mutation for today.




References:  







Friday, December 25, 2020

266: A Number is a Number

 Audio Link


You may recall that in our last episode we discussed the results of 19th-century attempts to rigorously define the concept of whole numbers.    These attempts culminated in the Peano axioms, a set of simple properties that defined numbers on the basis of the primitive concepts of zero and succession, plus a few related rules.   While this definition has its merits, Bertrand Russell pointed out that it also has some major flaws:  sets that we don’t think of as whole numbers, such as all numbers above 100, all even numbers, or all inverted powers of two, could also satisfy these axioms.   So, how did Russell propose to define numbers?


Here’s Russell’s definition:   “A number is anything which is the number of some class”.    Great, problem solved, we can end the podcast early today!   Or…  maybe not.   Let’s explore Russell’s concepts a bit, to figure out why this definition isn’t as circular as it seems.


The basic concept here is that we think of a whole number as a description of a class of sets, all sets which contain that number of elements.    Let’s take a look at the number 2.   The set of major US political parties, the set of Mars’s moons, and the set of my daughter’s cats are all described by the number two.   But how do we know this?   You might say we could just count the elements in each one to see they have the same number— but Russell points out that that would be cheating, since the concept of counting can only exist if we already know about whole numbers.    So what do we do?


The concept of 1-1 correspondence between sets comes to the rescue.    While we can’t count the elements of a set before we define whole numbers, we can describe similar sets:  a pair of sets are similar if their elements can be put into direct 1-1 correspondence, without any left out.   So despite lacking the intellectual power to count to 2, I can figure out that the number of my daughter’s kittens and the number of moons of Mars are the same:    I’ll map Mars’s moon Phobos to Harvey, and Mars’s moon Deimos to Freya, and see that there are no moons or cats left over.
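As a toy Python illustration (using my hypothetical moon-to-cat pairing from above), a computer can verify this kind of similarity by pairing elements off, without ever counting:

```python
# Two sets are "similar" in Russell's sense if their elements can be put
# into 1-1 correspondence with none left over.  For finite sets, any way
# of pairing them off will do-- no counting required.
def similar(a, b):
    a, b = set(a), set(b)
    while a and b:
        a.pop()    # remove one element from each set;
        b.pop()    # together they form one matched pair
    return not a and not b   # similar iff neither set has leftovers

mars_moons = {"Phobos", "Deimos"}
kittens = {"Harvey", "Freya"}
print(similar(mars_moons, kittens))   # True: they share a "numerical grouping"
```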


Thus, we are now able to look at two sets and figure out if they belong in the same class of similar sets.   Russell defines the number of a class of sets as the class of all sets that are similar to it.   Personally, I think this would have been a bit clearer if Russell hadn’t chosen to overload the term ‘number’ here, using it twice with slightly different definitions.  So let’s call the class of similar sets a numerical grouping for clarity.   Then the definition we started with, “A number is anything which is the number of some class”, becomes “A number is anything which is the numerical grouping of some class”, which at least doesn’t sound quite as circular.    The wording gets a little tricky here, and I’m sure some Russell scholars might be offended at my attempt to clarify it, but I think the key concept is this:   A number is defined as a class of sets, all of which can be put into 1-1 correspondence with each other, and which, if we conventionally count them (not allowed in the definition, but used here for clarification), have that number of elements.


Is this more satisfying than the Peano axioms?    Well, if we identify zero with the empty set and the succession operation with adding an element to a set and finding its new number, we can see that those axioms are still satisfied.   Furthermore, this interpretation does seem to rule out the pathological examples Russell mentions:  the numbers greater than 100, even numbers, and inverted powers of two all fail to meet this set-based definition.    And Russell successfully used this definition as the basis for numerous further significant mathematical works.   On the other hand, Russell’s method was not the final word on the matter:   philosophers of mathematics continue to propose and debate alternate definitions of whole numbers to this day.   


Personally, I know a whole number when I see it, and maybe that’s all the definition a non-philosopher needs on a normal day.   But it’s nice to know there are people out there somewhere thinking hard about why this is true.


And this has been your math mutation for today.



References:  

Monday, November 30, 2020

265: Defining Numbers, Sort Of

 Audio Link

One of the amazing things about mathematics in general is the way we can continuously discover and prove new results on the basis of simple definitions and axioms.    But in a way, this concept is similar to those ancient myths that our planet is sitting on the back of a giant turtle.   It sits on another turtle, which sits on another, and it’s turtles all the way down.   Where’s the bottom?    In order to prove anything, we need to start from somewhere, right?    


For millennia after the dawn of mathematics, people generally assumed that the starting point had to be the whole numbers and their basic operations: addition, multiplication, etc.    How much simpler could you get than that?   But in the 19th century, philosophers and mathematicians began serious efforts at trying to improve the overall rigor of their endeavor, by defining simpler notions from which you could derive whole numbers and prove their basic properties.   The average person may not need proofs that whole numbers exist, but mathematicians can be a bit picky sometimes.   One of the most successful of these efforts was by Giuseppe Peano from Italy, who published his set of axioms in 1889.


The Peano axioms can be stated in several equivalent forms, but for the moment we'll use the version in Bertrand Russell's nice "Introduction to Mathematical Philosophy".    As Russell states it, they are based on three primitive notions and five axioms.    The notions are the concepts of zero, number, and successor.   Note that he's not saying we start by knowing what all the numbers are, just that we are assuming some set exists which we are calling "numbers".  Based on these, the five axioms are:

  1. Zero is a number.
  2. The successor of any number is a number.
  3. No two numbers have the same successor.
  4. Zero is not the successor of any number.   (Remember that we’re just defining whole numbers here; negative numbers will potentially be a later extension to the system.)
  5. Induction works:   if some property P belongs to 0, and we can prove that if P is true for some number, it’s true for its successor, then P is true for all numbers.   


With these primitive notions, we can derive the existence of all the whole numbers, without having them known at the start.    For example, we know Zero has a successor which is a number, so let’s label that 1.   Then 1 has a successor number as well, so let’s call that 2, and so on.     We can also define the basic operations we’re used to, simply building on these axioms:  for example, let’s define addition.    We’ll create a plus operation, and define “a + 0” as equal to a for any number a.    Then we can define “a + the successor of b” as “the successor of (a+b)”.    So, for example, a + 1 equals a + “the successor of 0”, which becomes “the successor of a + 0”; by our original definition, this boils down to “the successor of a”.   Thus we have shown that the operation a+1 always leads to a’s successor, a good sign that we are using a reasonable definition of addition.     We can similarly define other operations such as multiplication and inequalities.
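For fun, here's a minimal Python sketch of this construction.   The choice of nested tuples to represent numbers is my own arbitrary one; all the axioms care about is a zero token and a successor operation:

```python
# Peano-style whole numbers: a zero token plus a successor operation.
ZERO = ()                        # arbitrary token standing for zero

def succ(n):
    return (n,)                  # wrap n to form its successor

def add(a, b):
    if b == ZERO:                # rule 1: a + 0 = a
        return a
    return succ(add(a, b[0]))    # rule 2: a + succ(c) = succ(a + c)

one, two = succ(ZERO), succ(succ(ZERO))
three = succ(two)
five = succ(succ(three))
print(add(two, three) == five)   # True: 2 + 3 = 5, derived from the two rules alone
```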


The Peano axioms were quite successful and useful, and a great influence on the progression of the foundations of mathematics.    Yet Russell points out that they had a few key flaws.   Think again about the ideas we started with:  zero, successors, and induction.   They certainly apply to the natural numbers… but could they apply to other things, that are not what we would think of as the set of natural numbers?     The answer is yes— there are numerous other sets that can satisfy the axioms.    


- One example:  let’s just define our “zero” for these axioms as the conventional whole number 100.   Then what is being described is the set of whole numbers above 100.   If you think about it for a minute, this won’t violate any of Peano’s axioms—  we are still defining a set of unique numbers with successors, none of which is below our “zero”, and to which induction applies.  

- As another example, let’s keep zero as our conventional zero, but define the “successor” operation as adding 2.   Now our axioms describe the set of even whole numbers.  A very useful set, indeed, but not the true set of whole numbers we were aiming to describe.

- As an even more absurd case, let’s define our “zero” as the conventional number 1, and the successor operation as division by 2.   Then we are describing the infinite progression 1, 1/2, 1/4, 1/8, and so on.    Another very useful series, but not at all matching our intention of describing the whole numbers.
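Here's a small Python sketch generating the first few "numbers" in each of these alternate models, alongside the standard one.   Each (zero, successor) pair satisfies Peano's axioms equally well:

```python
# Each (zero, successor) pair below satisfies the Peano axioms, yet only
# the first generates the whole numbers we actually intended to describe.
models = {
    "whole numbers":        (0,   lambda n: n + 1),
    "numbers above 100":    (100, lambda n: n + 1),
    "even numbers":         (0,   lambda n: n + 2),
    "inverted powers of 2": (1,   lambda n: n / 2),
}
for name, (zero, successor) in models.items():
    seq, n = [], zero
    for _ in range(5):
        seq.append(n)
        n = successor(n)
    print(f"{name:>22}: {seq}")
```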


Choosing alternate interpretations of this set, naturally, can lead to very weird interpretations of our derived operations such as addition and multiplication.   But Russell’s point is that despite the power and utility of Peano’s axioms, there is clearly something lacking.    This situation actually reminds me a bit of some challenges we encounter in my day job, proving that computer chip designs work correctly:   it’s nice if you can prove that your axioms lead to desired conclusions for your design, but you also need evidence that there aren’t also BAD designs that would satisfy those axioms equally well.   If such bad designs do exist, your job isn’t quite done.


Russell’s answer to this issue was to seek improved approaches to defining whole numbers based on set theory, which would more precisely correspond to our notion of what these numbers really are.    We’ll discuss this topic in a future podcast.    


And this has been your math mutation for today.



References: