Sunday, April 30, 2023

284: Don't Square Numbers, Triangle Them

 Audio Link

Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would not have heard in school.   Recording from our headquarters in the suburbs of Wichita, Kansas, this is Erik Seligman, your host.  And now, on to the math.


As I mentioned in the last episode, recently I read an unusual book, “A Fuller Explanation”, that attempts to explain the mathematical philosophy and ideas of Buckminster Fuller.  You probably know Fuller’s name due to his popularization of geodesic domes, those sphere-like structures built of a large number of connected triangles, which evenly distribute forces to make the dome self-supporting.   His unique approach to mathematics, “synergetics”, probably didn’t make enough true mathematical contributions to justify all the new terminology he created, but does look at many conventional ideas in different ways and through unusual philosophical lenses, which likely helped him to derive new and original ideas in architecture.   Today we’ll discuss another of his key points, his central focus on triangles as a key foundation of mathematics.


While most people who are conventionally educated tend to think of geometry primarily in terms of right angles and square coordinate systems, Fuller considered triangles to be a more fundamental element.   This follows directly from their essential ability to distribute forces without deforming.   Think about it for a minute:  if you apply pressure to the corner of a 4-sided figure, such as a square, its angles can be distorted without changing the lengths of the sides:  for example, you can squish it into a parallelogram.   Triangles don’t have this flaw:  assuming the sides are rigid and well-fastened together, you can’t distort one at all.   This stems from the elementary geometric fact that the lengths of the three sides fully determine the triangle:  for any 3 side lengths that can actually close up into a triangle (they must satisfy the triangle inequality), there is exactly one possible set of angles to go with them.   Thus, if you are building something in the real world, triangles and related 3-dimensional shapes naturally result in strong self-supporting structures.
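To see that fact in action, here’s a minimal computational sketch (my own illustration, not something from the episode):  given three valid side lengths, the law of cosines pins down every angle uniquely.

    import math

    def triangle_angles(a, b, c):
        # The law of cosines recovers each angle directly from the side
        # lengths, so three valid sides determine exactly one shape --
        # the rigidity property described above.
        if a + b <= c or b + c <= a or a + c <= b:
            raise ValueError("sides violate the triangle inequality")
        angle_a = math.degrees(math.acos((b*b + c*c - a*a) / (2*b*c)))
        angle_b = math.degrees(math.acos((a*a + c*c - b*b) / (2*a*c)))
        return angle_a, angle_b, 180.0 - angle_a - angle_b

    print(triangle_angles(3, 4, 5))   # (36.87..., 53.13..., 90.0), always the same

Feed it any sides that form a triangle and you always get the same unique angles back, which is exactly the rigidity Fuller was exploiting.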


According to one of the many legends of Fuller’s life, he first discovered the strength of this type of construction as a child in kindergarten.   His teacher had given the class an exercise where they were to build a small model house out of toothpicks and semi-dried peas.   At the time, Fuller’s parents had not yet discovered his need for glasses, so he was nearly blind and could not see what his classmates were doing at all.   As he recalled it, “All the other kids, the minute they were told to make structures, immediately tried to imitate houses.   I couldn’t see, so I felt.   And a triangle felt great!”   Basing his decisions on how the strength of the building felt as he was constructing it, he ended up developing a triangle-based complex of tetrahedra and octahedra.   The unusual look and unexpected solidity of his little house surprised his teachers and classmates, and many years later Fuller patented the design as the “Octet Truss”.   Fuller wasn’t the first architect to discover the strength of triangles, of course, but he put far more thought into extending them to useful 3-D structures than most who came before.


With the strong memory of such early discoveries, it’s probably not surprising that Fuller decided he wanted to build an entire mathematical system based on triangles.   He took this philosophy into all areas of mathematics that he looked at, sometimes to a bit of an extreme.   For example, a fundamental building block of algebra is looking at higher powers of numbers, starting with squaring.   But Fuller considered the operation of “triangling” much more important.   Of course this is a useful aspect of general algebra:  you might recall that the triangular numbers represent the number of units it takes to form an equilateral triangle:  start with 1 ball, add a row of 2 under it, add a row of 3 under that, etc.   So the series of triangular numbers starts with 1, 3, 6, 10, and so on, the nth term being n(n+1)/2.   Fuller considered these numbers especially significant because they also happen to count the pairwise relationships among a set of items:  n items can be paired up in ((n squared - n) / 2) ways, which is exactly the (n-1)th triangular number.   For example, if you have 4 people and want to connect phone lines so any 2 can always talk to each other, you need 6 phone lines, the 3rd triangular number.   For 5 people you need 10, the 4th triangular number, etc.   Fuller was pleased that this abstract concept of the number of mutual relationships had such a simple geometric counterpart.   This observation helped to guide him in many of his further explorations related to constructing strong 3-dimensional patterns that could be used in construction.
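Here’s a quick sketch (again my own, not Fuller’s) verifying that the pairwise-connection counts line up with the triangular numbers:

    def triangular(n):
        # The nth triangular number: 1, 3, 6, 10, ...
        return n * (n + 1) // 2

    def phone_lines(people):
        # Pairwise connections among n people: (n^2 - n) / 2 ways to pick 2.
        return people * (people - 1) // 2

    print([triangular(n) for n in range(1, 6)])   # [1, 3, 6, 10, 15]
    print(phone_lines(4), phone_lines(5))         # 6 10
    assert phone_lines(5) == triangular(4)        # the (n-1)th triangular number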


So, are you convinced yet that triangles are important?   Of course these have been a fundamental element of mathematics for thousands of years, but it’s been relatively rare for people to view them and think about them exactly the way Fuller did.   That’s probably why, despite exploring areas of mathematics that had been relatively well-covered in the past, he still was able to make original and useful observations that led to significant practical results.   It’s a nice reminder that just because other people have already examined some area of mathematics, it doesn’t mean that there aren’t more fascinating discoveries waiting just under the surface to be uncovered.


And this has been your math mutation for today.

 






Sunday, February 26, 2023

283: Escape From Infinity

 Audio Link

Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would not have heard in school.   Recording from our headquarters in the suburbs of Wichita, Kansas, this is Erik Seligman, your host.  And now, on to the math.


Recently I read an interesting book, “A Fuller Explanation”, that attempts to explain the mathematical philosophy and ideas of Buckminster Fuller, written by a former student and colleague of his named Amy Edmondson.   Fuller is probably best known as the architect who popularized geodesic domes, those sphere-like structures built of a large number of connected triangles, which evenly distribute forces to make the dome self-supporting.   He had a unique approach to mathematics, defining a system of math, physics, and philosophy called “synergetics”, which coined a lot of new terms for what was essentially the geometry of 3-dimensional solids.   His own writing on the subject is a very difficult read, which is what led to the need for Edmondson’s book.


One of the basic insights guiding Fuller was the distrust of the casual use of infinities and infinitesimals throughout standard mathematics.   He would not accept the definition of infinitely small points or infinitely thin lines, as these cannot exist in the physical world.    He was also bothered by the infinite digits of pi, which are needed to understand a perfect sphere.   Pi isn’t equal to 3.14, or 3.142, or 3.14159, but keeps needing more digits forever.   If a soap bubble is spherical, how does nature know when to stop expanding the digits?    Of course, in real life, we know a soap bubble isn’t truly spherical, as it’s made of a finite number of connected atoms.    Or as Fuller would term it, it’s a “mesh of energy events interrelated by a network of tiny vectors.”   But can you really do mathematics without accepting the idea of infinity?


With a bit of googling, I was surprised to find that Fuller was not alone in his distrust of infinity:   there’s a school of mathematics, known as “finitism”, that does not accept any reasoning that involves the existence of infinite quantities.  At first, you might think this rules out many types of mathematics you learned in school.   Don’t we know that the infinite series 1/2 + 1/4 + 1/8 … sums up to 1?   And don’t we measure the areas of curves in calculus by adding up infinite numbers of infinitesimals?   


Actually, we don’t depend on infinity for these basic concepts as much as you might think.   If you look at the precise definitions used in many areas of modern mathematics, we are talking about limits— the convenient description of these things as infinities is really a shortcut.   For example, what are we really saying when we claim the infinite series 1/2+1/4+1/8+… adds up to 1?   What we are saying is that if you want to get a value close to 1 within any given margin, I can tell you a number of terms to add that will get you there.   If you want to get within 25% of 1, add the first two terms, 1/2+1/4.   If you want to get within 15%, you need to add the three terms 1/2+1/4+1/8.   And so on.   (This is essentially the famous “epsilon-delta” proof method you may remember from high school.)   Thus we are never really adding infinite sums; the ‘1’ is just a marker indicating a value we can get arbitrarily close to by adding enough terms.
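To make that “any given margin” promise concrete, here’s a small sketch (my own, not from the episode) that, for any tolerance you name, finds how many terms suffice:

    def terms_needed(epsilon):
        # After k terms the partial sum is exactly 1 - 2**-k, so for any
        # epsilon > 0 this loop is guaranteed to stop -- and that guarantee
        # is all the limit statement really claims.
        partial, k = 0.0, 0
        while 1 - partial > epsilon:
            k += 1
            partial += 2.0 ** -k
        return k

    print(terms_needed(0.25))    # 2 terms: 1/2 + 1/4 = 0.75
    print(terms_needed(0.15))    # 3 terms: 0.875
    print(terms_needed(1e-9))    # 30 terms, and so on for any margin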


You might object that this doesn’t cover certain other critical uses of infinity:  for example, how would we handle Zeno’s paradoxes?    You may recall that one of Zeno’s classic paradoxes says that to cross a street, we must first go 1/2 way across, then 1/4 of the distance, and so on, adding to an infinite number of tasks to do.   Since we can’t do an infinite number of tasks in real life, motion is impossible.    The traditional way to resolve it is by saying that this infinite series 1/2+1/4+… adds up to 1.    But if a finitist says we can’t do an infinite number of things, are we stuck?   Actually no— since the finitist also denies infinitesimal quantities, there is some lower limit to how much we can subdivide the distance.   After a certain amount of dividing by 2, we reach some minimum allowable distance.   This is not as outlandish as it seems, since physics does give us the Planck Length, about 10^-35 meters, which some interpret as the pixel size of the universe.   If we have to stop dividing by 2 at some point, Zeno’s issue goes away, as we are now adding a finite (but very large) number of tiny steps which take  a correspondingly tiny time each to complete.    Calculating distances using the sum of an infinite series again becomes  just a limit-based approximation, not a true use of infinity.
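Out of curiosity, we can count the halvings involved (a back-of-the-envelope sketch of my own; keep in mind the “pixel size” reading of the Planck length is itself speculative):

    import math

    PLANCK_LENGTH = 1.616e-35   # meters, approximately

    # Halving a 1-meter street: how many halvings before the remaining
    # step drops below the Planck length?
    halvings = math.ceil(math.log2(1.0 / PLANCK_LENGTH))
    print(halvings)             # 116 -- a large but decidedly finite task list
    print(2.0 ** -halvings)     # about 1.2e-35 m, already sub-Planck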


Thus, you can perform most practical real-world mathematics without a true dependence on infinity.    One place where finitists do diverge from the rest of mathematics is in reasoning about infinity itself, or the various types of infinities.  You may recall, for example, Cantor’s ‘diagonal’ argument which proves that the infinity of real numbers is greater than the infinity of integers, leading to the idea of a hierarchy of infinities.   A finitist would consider this argument pointless, having no applicability to real life, even if it does logically follow from Cantor’s premises.


In Fuller’s case, this refusal to accept infinities had some positive results.   His focus on viewing spheres as a mesh of finite elements and balanced force vectors probably set him on the path to understanding geodesic domes, which became his major architectural accomplishment.   As Edmondson describes it, Fuller’s alternate ways of looking at standard areas of mathematics enabled him and his followers to circumvent previous rigid assumptions and open “rusty mental gates that block discovery”.   Fuller’s views were also full of other odd mathematical quirks, such as the idea that we should be “triangling” instead of “squaring” numbers when wanting to look at higher dimensions; maybe we will discuss those in a future episode.



And this has been your math mutation for today.

 






Tuesday, December 20, 2022

282: The Man With The Map

 Audio Link 

Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would not have heard in school.   Recording from our headquarters in the suburbs of Wichita, Kansas, this is Erik Seligman, your host.  And now, on to the math.


I was surprised to read about the recent passing of Maurice Karnaugh, a pioneering mathematician and engineer from the early days of computing.   Karnaugh originally earned his PhD from Yale, and went on to a distinguished career at Bell Labs and IBM, also becoming an IEEE Fellow in 1976.   My surprise came from the fact that he was still alive so recently:  he was born in 1924, and his key contributions were in the 1950s and 1960s, so I had assumed he died years ago.   In any case, to honor his memory, I thought it might be fun to look at one of his key contributions:  the Karnaugh Map, known to generations of engineering students as the K-map for short.


So, what is a K-map?   Basically, it’s a way of depicting the definition of a Boolean function:  a function that takes a bunch of inputs and generates an output, where every input and the output is a Boolean value, either 0 or 1.   As you probably know, such functions are fundamental to the design of computer chips and related devices.   When trying to design an electronic circuit schematic that implements such a function, you usually want to find a minimum set of basic logic gates, primarily AND, OR, and NOT gates, that defines it.


For example, suppose you have a function that takes 4 inputs, A, B, C, and D, and outputs a 1 only if both A and B are true, or both C and D are true.   You can implement this with 3 gates:  (A AND B), (C AND D), and an OR gate to look at those two results, outputting a 1 if either succeeded.   But often when defining such a function, you’re initially given a truth table, a table that lists every possible combination of inputs and the resulting output.   With 4 variables, the truth table would have 2^4, or 16, rows, 7 of which show an output of 1.   A naive translation of such a truth table directly to a circuit would result in one or more gates for every row of the table, so by default you would generate a much larger circuit than necessary.   The cool thing about a K-map is that even though it’s mathematically trivial— it just rewrites the 2^n lines of the truth table in a 2^(n/2) x 2^(n/2) square format (for an even number of inputs n)— it makes a major difference in enabling humans to draw efficient schematics.
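If you’d like to check that count of 7, here’s a quick enumeration sketch (mine, not anything from Karnaugh):

    from itertools import product

    def f(a, b, c, d):
        # The example function: 1 when (A and B) or (C and D).
        return int((a and b) or (c and d))

    # All 2^4 = 16 input rows of the truth table.
    rows = [(a, b, c, d, f(a, b, c, d))
            for a, b, c, d in product((0, 1), repeat=4)]
    print(sum(out for *_, out in rows))   # 7 rows produce an output of 1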


So how did Karnaugh help here?   The key insight of the K-map is to define a different shape for the truth table, one that conveys the same information, but in a way that the human eye can easily find a near-minimal set of gates that would implement the desired circuit.   First, we make the table two-dimensional, by grouping half the variables for the rows, and half for the columns.   So there would be one row for AB = 00,  one row for AB = 01, etc, and a column for CD=00, another for CD=01, etc.   This doesn’t actually change the amount of information:  for each row in the original truth table, there is now a (row, column) pair, leading to a corresponding entry in the two-dimensional K-map.   Instead of 16 rows, we now have 4 rows and 4 columns, specifying the outputs for the same 16 input combinations.


The second clever trick is to order the rows and columns according to a Gray code— that is, an ordering where each bit pattern differs from the previous one in only one bit.   So rather than the conventional numerical ordering of 00, 01, 10, 11, corresponding to 0, 1, 2, 3 in ordinary base 10, we sort the rows as 00, 01, 11, 10.   These are out of numerical order, but the fact that only one bit changes at a time makes the combinations much more convenient to visually analyze.
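There’s a tidy way to generate this ordering for any width (a side sketch of my own):  the standard binary-reflected Gray code maps each index i to i XOR (i shifted right by 1).

    def gray_order(bits):
        # Binary-reflected Gray code: consecutive entries differ in one bit.
        return [format(i ^ (i >> 1), f"0{bits}b") for i in range(2 ** bits)]

    print(gray_order(2))   # ['00', '01', '11', '10'] -- the K-map ordering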


Once you have created this two-dimensional truth table with the Gray code ordering, it has the very nice property that rectangular patterns of 1s correspond to simple Boolean product terms (what circuit designers call implicants), which enable an efficient representation of the function in terms of logic gates.   In our example, we would see that the row for AB=11 contains a 4x1 rectangle of 1s, and the column for CD=11 contains a 1x4 rectangle of 1s, leading us directly to the (A AND B) OR (C AND D) solution.   Of course, the details are a bit messy to convey in an audio podcast, but you can see more involved illustrations in online sources like the Wikipedia page in the show notes.   But the most important point is that in this two-dimensional truth table, you can now generate a near-minimal gate representation just by spotting rectangles of 1s, greatly enhancing the efficiency of your circuit designs.
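Since a grid is hard to convey in audio, here’s a little sketch (my own) that prints the 4x4 K-map for our example function, Gray-ordered on both axes, so you can see the two rectangles:

    GRAY = ["00", "01", "11", "10"]

    def f(a, b, c, d):
        return int((a and b) or (c and d))

    print("AB\\CD  " + "  ".join(GRAY))
    for ab in GRAY:
        a, b = int(ab[0]), int(ab[1])
        row = [str(f(a, b, int(cd[0]), int(cd[1]))) for cd in GRAY]
        print(ab + "      " + "   ".join(row))
    # The AB=11 row prints as a 4x1 block of 1s and the CD=11 column as a
    # 1x4 block, reading off (A AND B) OR (C AND D) directly.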


Over the years, as with many things in computer science, K-maps have faded in significance.  This is because the power of our electronic design software has grown exponentially:  these days, virtually nobody hand-draws a K-map to minimize a circuit.  Circuit synthesis software directly looks at high-level definitions like truth tables, and does a much better job at coming up with minimal gate implementations than any person could do by hand.   Some of the techniques used by this software relate to K-maps, but of course many more complex algorithms, most of which could not be effectively executed without a computer, have been developed in the intervening decades.   Despite this, Karnaugh’s contribution was a critical enabler in the early days of computer chip design, and the K-map is still remembered by generations of mathematics and computer science students.


And this has been your math mutation for today.

 


References:  
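https://en.wikipedia.org/wiki/Karnaugh_map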





Sunday, October 30, 2022

281: Pascal Vs Mathematics

 Audio Link

Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would not have heard in school.   Recording from our headquarters in the suburbs of Wichita, Kansas, this is Erik Seligman, your host.  And now, on to the math.


If you have enough interest in math to listen to this podcast, I’m pretty sure you’ll recognize the name of Blaise Pascal, the 17th-century French mathematician and physicist.   Among other achievements, he popularized the study of Pascal’s Triangle, helped found probability theory, invented and manufactured the first major mechanical calculator, and made essential contributions to the development of fluid mechanics.   His name was eventually immortalized in the form of a computer language, a unit of pressure, a university in France, and an otter in the Animal Crossing videogame, among other things.   But did you know that in the final decade of his life, he essentially renounced the study of mathematics to concentrate on philosophy and theology?


According to notes found after his death in 1662, Pascal had some kind of sudden religious experience in 1654, when he went into a trance for two hours, during which time he claimed that God gave him some new insights on life.   At that point he dropped most of his friends and sold most of his possessions to give money to the poor, leaving himself barely able to afford food.   He also decided that comfort and happiness were immoral distractions, so he started wearing an iron belt with interior spikes.   And, most disturbingly, he decided that math and science were no longer worthy of study, so he would devote all his time to religious philosophy.   He was still a very original and productive thinker, however, as in this period he wrote his great philosophical work known as the Pensees.


There were a couple of reasons why he may have decided to give up on math and physics at this point.   Part of it was certainly just a change in emphasis:  he was concentrating on something else now, which he considered more important.   He also made comments about worldly studies serving mainly to feed human egos, a pursuit that means nothing in the eyes of God.   At one point he stated that he could barely remember what geometry was.


He never completely suppressed his earlier love of mathematics though.     Ironically, at several points in the Pensees he uses clearly mathematical ideas.    Most famously, you may have heard of “Pascal’s Wager”, where he discusses the expected returns of believing in God vs not believing, based on his ideas of probability theory.   You have two choices, to believe or not believe.   If you choose to believe, you may suffer a finite net loss from time spent on religion, if God doesn’t exist— but if he does, you have an infinite payoff.   Choosing not to believe offers at best the savings from that lifetime loss.   Thus, the rational choice to maximize your expected gain is to believe in and worship God.
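As a toy version of the wager’s arithmetic (my own sketch;  Pascal’s actual argument uses a genuinely infinite payoff, which no finite stand-in can fully capture):

    def expected_gain_of_belief(p_god, payoff, lifetime_cost):
        # The finite cost of belief is paid either way; the payoff arrives
        # only if God exists.
        return p_god * payoff - lifetime_cost

    # However small the probability, some payoff is large enough to make
    # belief the higher-expectation choice, and an infinite payoff wins at
    # any positive probability -- which was Pascal's point.
    for payoff in (10, 1_000, 1_000_000):
        print(payoff, expected_gain_of_belief(0.001, payoff, 50.0))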


As many have pointed out since then, there is at least one huge hole in Pascal’s argument:   what if you choose to believe in the wrong God, or worship him in the wrong way?   Many world religions consider heresy significantly worse than nonbelief.   He has an implicit assumption of Christianity, in the form he knows, being the only option other than agnosticism or atheism.   I think Homer Simpson once refuted Pascal’s Wager effectively, when trying to get out of going to church with his wife:  “But Marge, what if we chose the wrong religion? Each week we just make God madder and madder.”


Another surprising use of math in the Pensees is Pascal’s comments on why the study of math and science may be pointless in general.   He compares the finite knowledge that man may gain by these studies against the infinite knowledge of God:   “… what matters it that man should have a little more knowledge of the universe?  If he has it, he but gets a little higher.  Is he not always infinitely removed from the end…?   In comparison with these Infinites all finites are equal, and I see no reason for fixing our imagination on one more than on another.”   While he doesn’t write any equations here, the ideas clearly have a basis in his previous studies related to finite and infinite values.   We could even consider this a self-contradiction:  wouldn’t the fact that his math just gave him some theological insight mean that it was, in fact, worthy of study to get closer to God?


Pascal did also still engage a few times during this period in direct mathematical studies.   Most notably, in 1658, he started speculating on some properties of the cycloid, the curve traced by a point on a moving circle, to distract himself while suffering from a toothache.   When his ache got better, he took this as a sign from God to continue his work on this topic.   That excuse seems a bit thin to me:  clearly he never lost his inbuilt love for mathematics, even when he felt his theological speculations were pulling him in another direction.


And this has been your math mutation for today.






Friday, August 26, 2022

280: Rubik's Resurgence

Audio Link 


Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would not have heard in school.   Recording from our headquarters in the suburbs of Wichita, Kansas, this is Erik Seligman, your host.  And now, on to the math.


Do you remember the Rubik’s Cube, that 3x3x3 cube puzzle of multicolored squares, with each side able to rotate independently, that was a fad in the 1980s?   The goal was to take a cube that has been scrambled by someone else with a few rotations, and get it back to a configuration where all squares on each side match in color.   Hungarian architecture professor Erno Rubik originally developed it in 1975, initially intending it to model how a structure can be designed with parts that move independently yet still hold together.   The legend is that after he demonstrated the independent motion of a few sides and had trouble rearranging it back to the original configuration, he realized he had an interesting puzzle.   In the early 1980s, it started winning various awards for best toy or puzzle, and quickly became the best-selling toy of all time.   (A title which it apparently still holds.)   It was insanely difficult for the average person to solve, though typically with some trial and error you could get one or two sides done, the key to holding people’s interest.   To get a sense of how popular it was back then, there were cube-solving guidebooks that sold millions of copies, and even a Saturday morning cartoon series about a cube-shaped superhero.   But by 1983 or so sales were dropping off, and the fad was considered over.


Like everyone else who was alive in the early 80s, I spent some time messing around with cubes, but found it too frustrating after a while, and eventually solved it with the aid of a guidebook.   I remember being impressed by a classmate who swore he hadn’t read any guidebooks, but could take a scrambled cube from me, go work on it in a corner of the room, and come back with it fully solved.   His secret was to remove the colored stickers from the squares and put them back in the right configuration, without rotating the sides at all.   But the non-cheating solutions in the guidebooks typically revolve around identifying sequences of moves that rearrange known sets of squares while keeping others in their current configuration, then getting the desired pieces in place layer by layer.   These sequences are tricky in that they appear to be completely scrambling the cube before restoring various parts, which is a key reason why average cubers would fail to discover them— you need to mess up your cube on the way to completing the solution.   It’s actually been mathematically proven that any reachable configuration can be solved in 20 moves or fewer.


One aspect of the cube that is always fun to mock is its marketing campaign.   Typically the cube was sold and advertised with the phrase, “Over 3 Billion Combinations!”   But if you think about it for a minute, there are a lot more.   A cube has 8 corner pieces, so you could combine those in 8 factorial (8!) ways, which is 8 * 7 * 6 …  down to 1.   And since each of these combinations can have each corner piece in 3 different rotations, you need to multiply by 3 to the 8th power.   Similarly, the 12 side pieces can be arranged in 12 factorial (12!) ways, then you need to multiply by 2 to the 12th power.    So to find the total possibilities, we need to multiply 8! * 3^8 * 12! * 2^12.    It turns out that only 1/12 of these positions are actually reachable from a starting solved state, so we need to divide the result by 12.   But we still get a total a bit larger than 3 billion:  4.3 * 10^19.   So the marketing campaign was underestimating the possibilities by a ridiculous amount— in fact, if you square the 3 billion that they gave, you still don’t quite reach the true number of cube configurations.   Perhaps they were afraid that the terms for larger numbers, such as the quintillions needed for the true number, would confuse the average customer.
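The arithmetic is easy to check with a few lines (a sketch of my own):

    from math import factorial

    corners = factorial(8) * 3**8      # corner positions times orientations
    edges = factorial(12) * 2**12      # edge positions times orientations
    total = corners * edges // 12      # only 1/12 are reachable by legal moves

    print(total)                       # 43252003274489856000, about 4.3e19
    print(total > 3_000_000_000**2)    # True: even "3 billion squared" falls short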


I was surprised to read recently that the Rubik’s Cube has actually gained in popularity again in the modern era, fueled by daring YouTubers who speed-solve cubes, solve them with their feet, and perform similar feats.   Lots of enthusiasts take their cubing very seriously, and there are Rubik’s Cube speed-solving championship events held regularly.   If you’re a professional-class cuber, you can buy specially made Rubik’s lubricants to enable you to rotate your sides faster.   The latest championship, this year in Toronto, included standard cube solving plus events on varying size cubes (up to 7x7x7), blindfolded cube solving, and one-handed cube solving.   (Apparently they eliminated the foot-only solving, though, concerned that it was unsanitary.)   Champion Matty Inaba fully solved a 3x3x3 cube in 4.27 seconds, which sounds like about the time it usually takes me to rotate a side or two.   Author A.J. Jacobs, in “The Puzzler”, also points out that if you’re too intimidated to compete yourself, there are Fantasy Cubing leagues, similar to Fantasy Football, where you can bet on your favorite combination of winners.


So, what does the future hold for the sport of Rubik’s Cubing?   Well, even though the big leagues are only competing up to the 7x7x7 level this year, Jacobs tracked down a French inventor who has put together a 33x33x33 one.   As you would guess, the larger cubes are pretty challenging to build— this one involved over 6000 moving parts and ended up the size of a medicine ball.   Experienced cubers do say, however, that the basic algorithms for solving are fundamentally the same for all cube sizes.   Due to the home manufacturing capabilities enabled by modern 3-D printing, one of Jacobs’ interviewees points out that we are in a “golden age of weird twisty puzzles”.   Hobbyists have invented many Rubik’s-like variants that are not perfect cubes, and have numerous asymmetric parts, to create some extra challenge.   Personally I’m not sure I would ever have the patience to deal with anything beyond a basic 3x3x3 cube, though maybe I’ll look into joining a fantasy league sometime.


And this has been your math mutation for today.







Saturday, July 2, 2022

279: Improbable Envelopes

 Audio Link

Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would not have heard in school.   Recording from our headquarters in the suburbs of Wichita, Kansas, this is Erik Seligman, your host.  And now, on to the math.


Today we’re going to talk about a well-known paradox that a co-worker recently reminded me about, the Two Envelopes Paradox.   It’s similar to some others we have discussed in past episodes, such as the Monty Hall paradox, in that a slightly incorrect use of the laws of probability gives an apparent result that isn’t quite correct.


Here’s how the basic paradox goes.   You are shown two envelopes, and told that one contains twice as much money as the other one, with the choice of envelopes for each amount having been pre-determined by a secret coin flip.  No information is given as to the exact amount of money at stake.   You need to choose which envelope you want.   After you initially point at one, you are told the amount of money in it, and asked, “Would you like to switch to the other one?”   Since you have been presented no new information about which envelope has more money, it should be obvious that switching makes no difference at this point, as either way you have a 50% chance of having guessed the right one.


But let’s calculate the expected value of switching envelopes.   Say the envelope you chose contains D dollars.   If you stick with your current envelope, the expected amount you will get is simply D.  The other one contains either 2D or D/2 dollars, with 1/2 probability for each.   The expected value if you switch is then 1/2*2D + 1/2*D/2, which adds up to (5/4)D.    Thus your expected winnings if you switch are greater than the D you would gain from your first choice, and you should always switch!   But this doesn’t make much sense, if you had no additional information.   Can you spot the flaw in this reasoning?


The key is to recognize that you’re combining two different dollar amounts in your expected value calculation:  the D that exists in the case where you initially chose the smaller envelope is different from the D if you chose the larger one.   The easiest way to see this is to define another variable, T, the total money in the combination of two envelopes.   In this case, the larger envelope contains (2/3)T, and the smaller has (1/3)T.   Now the expected value is the same whether you keep or switch:  (1/2)*(2/3)T + (1/2)*(1/3)T, or simply T/2.   Alternatively, you could have come to the same conclusion by modifying our original reasoning using Bayes’ Theorem, replacing our reuse of D with correct calculations for the conditional values in each envelope.
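If the algebra doesn’t convince you, a quick simulation (my own sanity check, with an arbitrary total of 300 dollars) shows that always switching and always keeping earn the same average:

    import random

    def deal(total=300.0):
        # One envelope holds total/3, the other 2*total/3; a coin flip
        # decides which one you point at first.
        small, large = total / 3, 2 * total / 3
        first = random.choice([small, large])
        other = large if first == small else small
        return first, other

    n = 1_000_000
    keep = switch = 0.0
    for _ in range(n):
        first, other = deal()
        keep += first
        switch += other
    print(keep / n, switch / n)   # both approach T/2 = 150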


But weirdly enough, brilliant.org seems to point out an odd strategy that lets you decide whether to switch in a way that gives a greater than 50% chance of getting the higher envelope.   Here’s how it works.   After you choose your first envelope, choose a random value N using a normal probability distribution.   This is the common “bell curve”, with the important property that every interval on the number line has some nonzero probability of containing your pick, since the ‘tails’ of the bell asymptotically approach but never reach 0.   So if the center of the normal distribution is at 100, you have the highest probability of choosing a number near 100, but still a small chance of choosing 1000, and a really tiny chance of choosing 10 billion.   Therefore, if you’ve chosen a number N using this distribution, there is some nonzero probability, P, that N is between the dollar values in the two envelopes.   Now pretend the envelope you didn’t choose contains that number N, and decide whether to switch on that basis:  if N is greater than the amount in the envelope you chose, you switch, and otherwise keep your original envelope.


Why does using this random number help?   Well, if your random number N was smaller than both envelopes, then you will always keep your first choice, and there is no change to the overall 50-50 chance of winning.   If it was larger than both, you’ll always switch, and again no change.   But what if you got lucky, and N was between the two values, which we know can happen with nonzero probability P?   Then you will make the right choice:  you will switch if and only if your initial envelope was the smaller one!   Thus, with probability P, the chance of N being in this lucky interval, you will be guaranteed a win, while the rest of the time you have the original odds.   So the overall chance of winning is (1/2)*(1-P) + 1*P, or 1/2 + P/2, which is slightly greater than 1/2.
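The edge is easy to watch in simulation (my own sketch;  the envelope amounts and the bell curve’s center and spread are arbitrary choices, since the trick needs no knowledge of the stakes):

    import random

    def play_once():
        # Hidden from the player: the smaller amount is drawn at random,
        # and one envelope holds double the other.
        small = random.uniform(1, 200)
        amounts = [small, 2 * small]
        random.shuffle(amounts)
        first, other = amounts

        # Random threshold N from a normal distribution, then the rule:
        # switch exactly when N exceeds the amount we're holding.
        n_threshold = random.gauss(100, 100)
        final = other if n_threshold > first else first
        return final == max(amounts)

    trials = 1_000_000
    wins = sum(play_once() for _ in range(trials))
    print(wins / trials)   # reliably a bit above 0.5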


Something seems fishy about this reasoning, but I can’t spot an obvious error.   I remember also hearing about this solution from a professor back in grad school, and periodically tried searching the web for a refutation, but so far haven’t found one.   Of course, with no information about the actual value of P, this edge can be unpredictably small, so is probably of no real value in practical cases.    There also seems to be a philosophical challenge here:  How meaningful is an unpredictable bonus to your odds of an unknowable amount?   I’ll be interested to hear if any of you out there have some more insight into the problem, or pointers to further discussions of this bizarre probability trick.


And this has been your math mutation for today.




References:  

https://brilliant.org/wiki/two-envelope-paradox/ 






Sunday, May 22, 2022

278: Bicycle Repair Man

 Audio Link

With the arrival of summer weather in many parts of the U.S., it’s time to think again about various outdoor activities.   Watching a few bicycles pass by my house the other day brought to mind a famous anecdote about pioneering mathematician and computer scientist Alan Turing.   As you may recall, Turing was the famous British thinker who not only founded theoretical computer science, but also was the primary visionary in the project to crack the German Enigma code, a key contribution to the Allied victory in World War II.   As often happens with such geniuses, his personal life was very odd, though he usually had reasons for whatever he did.   For example, he was mocked as ridiculously paranoid for chaining his favorite coffee mug to a radiator.   But there are rumors that years later, a pile of smashed coffee mugs was found near his old office, apparently thrown away by a disgruntled co-worker.   Another crazy story involves Turing’s efforts to repair his bicycle.


As the story goes, Turing noticed that every once in a while, the chain on his bike would come loose, and he would have to stop and kick it a certain way to get it back on its track again.   After a while, he noticed that the intervals at which this was happening seemed kind of regular, so he decided to check that theory rigorously.   He attached a mechanical counter, and started measuring the exact interval at which this problem was occurring.   It turned out he was right— the intervals were regular.   The number of clicks between failures was determined by the least common multiple of the number of spokes in his wheel, the number of links in the chain, and the cogs in the bicycle gears (which equals their product when those counts share no common factors).   Once he discovered that, he took a close look at the components, and soon discovered the root cause:  there was a particular spoke that was slightly bent, and when this got too close to a particular link in the chain that was also damaged, the chain would be pulled off.   Armed with this knowledge, he was able to correctly fix the bent spoke and resolve the problem.   It’s been said that any competent bike repairman would have spotted the issue in a few minutes without bothering with counters and intervals, but the mathematics of that would be pretty boring.
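Here’s a toy illustration of that periodicity (my own hypothetical numbers;  the story doesn’t record Turing’s actual counts):

    from math import lcm   # Python 3.9+

    spokes = 40    # hypothetical spoke count
    links = 101    # hypothetical number of chain links
    cogs = 21      # hypothetical cog count

    # The bent spoke and the damaged link realign with a period given by
    # the least common multiple, so the failures recur at regular,
    # predictable intervals -- just what Turing's counter revealed.
    print(lcm(spokes, links, cogs))   # 84840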


Naturally, as with any anecdote about someone famous, there are some alternate versions of this story.   My favorite is the one that changes the ending slightly:  once Turing figured out the formula for when the chain would jump off, he started carefully calculating the intervals as he rode the bike, and stopping to kick it at the exact right times he calculated.   That’s a fun one, and certainly fits into the stereotypes of Turing’s eccentricity.   But I do find it a bit hard to believe.  When riding a bike outdoors, there are lots of variables involved to interrupt your concentration:  road obstacles, changing inclines, approaching cars, etc.   Could someone safely riding a bicycle successfully keep a running count of the wheel and chain rotations, over a continuous ride of several miles?   And in Turing’s case, it was further complicated by the fact that he always wore a gas mask as he rode, to prevent triggering his allergies.   But the alarm clock he was known to wear around his waist might have helped.


In honor of this story, there was actually a proposal back in 2015 to name a bicycle bridge in Cambridge, England after Turing.   I didn’t see any further references to this online, so it looks like it didn’t pass.   But there’s plenty of non-bicycle-related stuff named after him in the computer science world.    If you want someone to propose a bicycle bridge in your name, next time your bike breaks down, think about the clever mathematical tricks you might use to diagnose the issue.   Also, remember to found a branch of mathematics and win a world war.    Or forget about the bridge, and just take your bike to a competent repair shop.


And this has been your math mutation for today.




References: