Sunday, June 3, 2018

241: A Strange Fixation

Audio Link


Hi everyone.  Before we get started, I want to thank listener 'katenmkate' for posting another nice review in Apple Podcasts recently.  Thanks Kate!  By the way, other listeners: the podcast world is constantly growing more competitive, so a new positive review is still useful. Please think about posting one if you like the podcast.  The same goes for the Math Mutation book, which is available & reviewable on Amazon.  Also remember, if you like Math Mutation enough that you want to donate money, please make a donation to your favorite charity & email me about it, and I'll mention you on the podcast for that as well.

Now for today's topic.  Try the following experiment: take any 4-digit number and generate 2 values from it by arranging the digits in ascending and then descending order.  By the way, don't choose a degenerate case where the number has all digits the same, like '2222'.  (For today I'll pronounce 4-digit numbers just by saying the digits, to save time.)  Now subtract the smaller 4-digit number from the larger one.  You should find that you get another 4-digit number (or a 3-digit one, in which case you can add a leading 0).  Repeat the process for a few steps.  Do you notice anything interesting?

For example, let's start with 3524.  After we generate numbers from the two orderings of the digits, we subtract 5432 - 2345, and get 3087.  Repeating the procedure, we subtract 8730 - 0378, and get 8352.  One more time, and we do 8532 - 2358, to get 6174.  So far this just seems like a weird procedure for generating arbitrary numbers, maybe a bit of a curiosity but nothing special.  But wait for the next step.  From 6174, we do 7641 - 1467, which gives us… 6174 again!  We have reached an attractor, or fixed point: repeating our process always brings us back to this same number, 6174.  If you start with any other non-degenerate 4-digit number, you will find that you still get back to this same value, though the number of steps may vary (it never takes more than seven).
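If you want to experiment without doing the arithmetic by hand, here is a minimal Python sketch of the routine (my own illustration of the procedure described above, nothing official):

```python
def kaprekar_step(n):
    """One step of Kaprekar's routine, treating n as a 4-digit
    number (padded with leading zeros if necessary)."""
    digits = sorted(f"{n:04d}")            # digits in ascending order
    asc = int("".join(digits))             # e.g. 3524 -> 2345
    desc = int("".join(reversed(digits)))  # e.g. 3524 -> 5432
    return desc - asc

# Trace the example from above: 3524 -> 3087 -> 8352 -> 6174 -> 6174 ...
n = 3524
for _ in range(5):
    print(f"{n:04d}")
    n = kaprekar_step(n)
```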

This value, 6174, is known as Kaprekar’s Constant,  named after the Indian mathematician who discovered this strange property back in 1949.   Strangely, I was unable to find any mention of an elegant proof of why this number is special; it seems like it’s just one of those arbitrary coincidences of number theory.  It’s easy to prove the correctness of the Constant algebraically, but that really doesn’t provide much fundamental insight.   This constant does have some interesting additional properties though:   in the show notes, you’ll find a link to a fun website by a professor named Girish Arabale.    He uses a computer program to plot how many steps of Kaprekar’s operation it takes to reach the constant, indicating the step count by color, and generates some pretty-looking patterns.   The web page even showcases some music generated by this operation, though I still prefer David Bowie. 

We see more strange properties if we investigate different numbers of digits.  There is a corresponding constant for 3-digit numbers, 495, but none for any higher digit count.  If you try the same operation with 2, 5, or 6 digit numbers, for example, you will eventually settle into a small loop of numbers that generate each other, or into one of a small set of possible self-generating constants, rather than reaching a single universal constant.  Let's try 2 digits, starting arbitrarily at 90:  90 - 09 = 81.  81 - 18 = 63.  63 - 36 = 27.  72 - 27 = 45.  54 - 45 = 09, and you can see we're now in a loop.  If we continue the operation, we will repeat 9, 81, 63, 27, 45, 9 forever!  You will see similar results if you try the operation in bases other than 10, where the digits carry different place values and a single Kaprekar-style constant doesn't appear so easily.
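Generalizing the earlier sketch to any digit count makes the loop easy to watch; this quick fragment (again just my own toy code) follows the 2-digit case until a value repeats:

```python
def kaprekar_step(n, width):
    """One step of the routine, treating n as a `width`-digit number."""
    digits = sorted(f"{n:0{width}d}")
    return int("".join(reversed(digits))) - int("".join(digits))

# Follow the 2-digit operation from 90 until we revisit a value.
seen, n = [], 90
while n not in seen:
    seen.append(n)
    n = kaprekar_step(n, 2)
print(seen, "-> next value", n, "starts the cycle again")
# prints [90, 81, 63, 27, 45, 9] -> next value 81 starts the cycle again
```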

One other aspect of Kaprekar's Constant we should point out is that it's really just a specific example of the general phenomenon of "attractors".  If you take an arbitrary iterative operation, where you repeat some calculation that leads you back into the same space of numbers, it's not that unusual to find that you eventually settle into a small loop, a small set of constants, or a fixed point like Kaprekar's Constant.  I think his constant became famous because it's described by a relatively simple operation, and was discovered early in the history of the popularization of the mathematical concept of attractors.

Thinking about such things has actually become very important in some areas of computer science, such as the generation of pseudo-random numbers.  We often want a repeated mathematical procedure that generates numbers which look random to the consumer, but which, from the same starting point, called a "seed", will always generate the same sequence.  This is important in areas like simulation testing of engineering designs.  In my day job, for example, I might want to see how a proposed new computer chip would react to a set of several million random stimuli, but in case something strange happens, I want to be able to easily reproduce the same set of supposedly "random" inputs later.  The typical solution is a pseudo-random number generator, where each number generates the next one through some function.
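As a toy illustration of the idea, here's a deliberately simple linear congruential generator, using textbook constants; real-world generators are more sophisticated, and this is certainly not the code my day-job tools actually use:

```python
def lcg(seed, count, a=1664525, c=1013904223, m=2**32):
    """Toy linear congruential generator: each value is
    (a * previous + c) mod m.  The same seed always yields
    the same "random-looking" sequence."""
    x = seed
    for _ in range(count):
        x = (a * x + c) % m
        yield x

print(list(lcg(seed=42, count=3)))  # reproducible: identical on every run
print(list(lcg(seed=42, count=3)))
```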

Kaprekar’s Constant helps demonstrate that if we choose an arbitrary mathematical procedure to repeatedly generate numbers from other ones, the results might not end up as arbitrary or random-looking as you would expect.   You can create some convoluted repetitive procedure that really has no sensible reason in its definition for creating a predictable pattern, like Kaprekar’s procedure, and suddenly find that it always leads you down an elegant path to a specific constant or a small loop of repeated results.   Thus, the designers of pseudo-random number generators must be very careful that their programs really do create random-seeming results, and are free of the hazards of attractors and fixed points like Kaprekar’s Constant.

And this has been your Math Mutation for today.



Saturday, April 28, 2018

240: R.I.P. Little Twelvetoes

Audio Link

I was sad to hear of the recent passing of legendary jazz artist Bob Dorough, whose voice is probably still echoing in the minds of those of you who were children in the 1970s.  While Dorough composed many excellent jazz tunes (his "Comin' Home Baby" is still on regular rotation on my iPhone), he is most famous for his work on Schoolhouse Rock.  Schoolhouse Rock was a set of catchy music videos shown on TV in the 70s, providing educational content for elementary-school-age children in the U.S.  Apparently three minutes of this educational content per hour would offset the mind-melting effects of the horrible children's cartoons of the era.  But Dorough's work on Schoolhouse Rock was truly top-notch: the music really did help a generation of children memorize their multiplication tables, as well as learn some basic facts about history, science, and grammar.  However, my favorite of the songs may have been the least effective, with lyrics that were mostly incomprehensible to kids who were not total math geeks.  I'm talking about the song related to the number 12, "Little Twelvetoes."  I still chuckle every time it comes up on my iPhone.

Now, most of the songs in the Multiplication Rock series tried to relate their numbers to something concrete that the kids could latch on to.  For example, "Three is a Magic Number" talked about a family with a man, woman, & baby; "The Four-Legged Zoo" talked about the many animals with four legs; and "Figure Eight" talked about ice skating.  But Dorough must have been smoking some of those interesting 1970s drugs when he got to the number 12.  Instead of choosing something conventional like an egg carton or the grades in school, he envisioned an alien with 12 fingers and toes visiting Earth and helping humans to do math.  OK, this was a bit odd, but I think it could have been made to work.  Instead, he did his best to pile further oddities on top of that, with lyrics especially designed to confuse the average young child.

First, the song spends a lot of time introducing the concept that the alien counts in base 12.   Here’s the actual narration:   

Now if man had been born with 6 fingers on each hand, he'd also have 12 toes or so the theory goes. Well, with twelve digits, I mean fingers, he probably would have invented two more digits when he invented his number system. Then, if he saved the zero for the end, he could count and multiply by twelve just as easily as you and I do by ten.
Now if man had been born with 6 fingers on each hand, he'd probably count: one, two, three, four, five, six, seven, eight, nine, dek, el, doh. "Dek" and "el" being two entirely new signs meaning ten and eleven.  Single digits!  And his twelve, "doh", would be written 1-0.
Get it? That'd be swell, for multiplying by 12.

For those of us who were really into this stuff, that concept was pretty cool.   But for the average elementary school student struggling to learn his regular base-10 multiplication tables, introducing another base and number system doesn’t seem like the best teaching technique.   To be fair, Dorough was just following the lead of the ill-fated “New Math” movement of the time, which I have mentioned in earlier episodes of this podcast.   A group of professors decided that kids would learn arithmetic better if teachers concentrated on the theoretical foundations of counting systems, rather than traditional drilling of arithmetic problems.   Thankfully, their mistake was eventually realized, though later educational fads haven’t been much better.

On top of pushing this already-confusing concept, the song introduced those strange new digits "dek" and "el", and a new word "doh" for 12.  While it's true that we do need more digits when using a base above 10, real-life engineers who use higher bases, most often base-16 in computer-related fields, just use letters for the digits after 9: A, B, C, etc.  That way we have familiar symbols in an easy-to-remember order.  I guess it's fun to imagine new alien squiggles for the extra digits instead… but I think that just makes the song even more confusing to a young child.  And why not just say "twelve" instead of a new word "doh" when we get to the base?  (Note, however, that this song predated Homer Simpson.)

But the part of the song I find truly hilarious is the refrain, which implies that having this alien around would help us to do arithmetic when multiplying by the number 12.  It goes, "If you help me with my twelves, I'll help you with your tens.  And we could all be friends."  But think about this a minute.  It's true that the alien who writes in base 12 could easily multiply by 12 by adding a '0' to a number, just like we could do when multiplying by 10.  So suppose you ask Little Twelvetoes to multiply 9 times 12.  He would write down "90", in base 12.  Exactly how would this be helpful?  You would now have to convert the number written as 90 in base 12 to a base 10 number to be able to understand it, an operation at least as difficult as multiplying 9 times 12 would have been in the first place!  So although this alien's faster way of writing down answers to times-12 multiplication would be interesting, it would be of absolutely no help to a human doing math problems.  You could be friends with the alien, but your math results would just confuse each other.
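To make the alien's unhelpfulness concrete, here's a little sketch (mine, not the song's) showing that reading the alien's answer is itself a base-conversion problem:

```python
def to_base12(n):
    """Render a nonnegative integer in base 12, borrowing the
    engineer's convention of A and B for the song's "dek" and "el"."""
    digits = "0123456789AB"
    out = ""
    while True:
        n, r = divmod(n, 12)
        out = digits[r] + out
        if n == 0:
            return out

print(to_base12(9 * 12))  # '90': trivial for the alien to write down...
print(int("90", 12))      # ...but converting it back to 108 is the real work
```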

Anyway, I should point out that despite these various absurdities, the part of the song that lays out the times-12 multiplication table is pretty catchy.   So if the kids could get past the rest of the confusing lyrics, it probably did still achieve its goal of helping them to learn multiplication.   And of course, despite the fact that I enjoy making fun of it, I and millions of kids like me did truly love this song— and still remember it over 40 years later.   Besides, this is just one of many brilliantly memorable tunes in the Schoolhouse Rock series.   I think Bob Dorough’s music and lyrics will continue to play in my iPhone rotation for many years to come.

And this has been your Math Mutation for today.


Saturday, March 31, 2018

239: The Shape Of Our Knowledge

Audio Link

Recently I’ve been reading Umberto Eco’s essay collection titled “From the Tree to the Labyrinth”.   In it, he discusses the many attempts over history to cleanly organize and index the body of human knowledge.    We have a natural tendency to try to impose order on the large amount of miscellaneous stuff we know, for easy access and for later reference.   As typical with Eco, the book is equal parts fascinating insight, verbose pretentiousness, and meticulous historical detail.    But I do find it fun to think about the overall shape of human knowledge, and how our visions of it have changed over the years.

It seems like most people organizing a bunch of facts start out by trying to group them into a "tree".  Mathematically, a tree is basically a structure that starts with a single node, which then links to sub-nodes, each of which links to sub-sub-nodes, and so on.  On paper, it looks more like a pyramid.  But essentially it's the same concept as the folders, subfolders, and sub-sub-folders you're likely to use on your computer desktop.  For example, you might start with 'living creatures'.  Under it you draw lines to 'animals', 'plants', and 'fungi'.  Under the animals you might have nodes for 'vertebrates', 'invertebrates', etc.  Actually, living creatures are one of the few cases where nature provides a natural tree, corresponding to evolutionary history: each species usually has a unique ancestor species that it evolved from, as well as possibly many descendants.

Attempts to create tree-like organizations date back at least as far as Aristotle, who tried to identify a set of rules for properly categorizing knowledge.  Later authors made numerous attempts to fully construct such catalogs.  Eco points out some truly hilarious (to modern eyes) examples, such as Pedro Bermudo's 17th-century attempt to organize knowledge into exactly 44 categories.  While some, such as "elements", "celestial entities", and "intellectual entities", seem relatively reasonable, other categories include "jewels", "army", and "furnishings".  Perhaps the inclusion of "furnishings" as a top-level category on par with "celestial entities" just shows us how limited human experience and knowledge typically was before modern times.

Of course, the more knowledge you have, the harder it is to fit cleanly into a tree, and the more logical connections you see that cut across the tree structure.  Thus our attempts to categorize knowledge have evolved into what Eco calls a labyrinth, a huge collection with connections in every direction.  For example, wandering down the tree of species, you need to follow very different paths to reach a tarantula and a corn snake, one being an arachnid and the other a reptile.  Yet if you're discussing possible caged parent-annoying pets with your 11-year-old daughter, those two might actually be closely linked.  So our map of knowledge, or semantic network, would probably merit a dotted line between the two.  We don't just traverse directly down the tree; we have many lateral links to follow as well.  Eco seems to prefer the vivid imagery of a medieval scholar wandering through a physical maze, but in a mathematical sense I think he is referring to what we would call a 'graph', a huge collection of nodes with individual connections in arbitrary directions.
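In programming terms, the distinction is pleasantly small: a labyrinth is just a tree plus lateral links, which turns it into a general graph.  A toy sketch of my own, using the species example:

```python
# A tiny fragment of the tree of life, each node linking to its children...
tree = {
    "living creatures": ["animals", "plants", "fungi"],
    "animals": ["vertebrates", "invertebrates"],
    "vertebrates": ["reptiles"],
    "invertebrates": ["arachnids"],
    "reptiles": ["corn snake"],
    "arachnids": ["tarantula"],
}

# ...plus one lateral, labyrinth-style link that no tree can express.
cross_links = [("tarantula", "corn snake", "caged parent-annoying pets")]
```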

On the other hand, this labyrinthine nature of knowledge doesn't negate the usefulness of tree structures; as humans, we have a natural need to organize into categories and subcategories to make sense of things.  Nowadays, we realize both the 'tree' and 'labyrinth' views of knowledge on the Internet.  As a tree, the Internet consists of pages with subpages, sub-sub-pages, etc.  But a link on any page can lead to an arbitrary other page, not part of its own local hierarchy, whose knowledge is somehow related.  It's almost too easy these days.  If you're as old as me, you can probably recall your many hours poring through libraries researching papers back in high school and college.  You probably spent lots of time scanning vaguely related books to try to identify those labyrinth-like connections that were not directly visible through the 'trees' of the card catalog or Dewey Decimal system.

Although it’s very easy today to find lots of connections on the Internet, I think we still have a natural human fascination with discovering non-obvious cross connections between nodes of our knowledge trees.   A simple example is our amusement at puns, when we are suddenly surprised by an absurd connection due only to the coincidence of language.    Next time my daughter asks if she can get a tarantula for Christmas, I’ll tell her the restaurant only serves steak and turkey.    More seriously, finding fun and unexpected connections is one reason I enjoy researching this podcast, discussing obscure tangential links to the world of mathematics that are not often displayed in the usual trees of math knowledge.   Maybe that’s one of the reasons you like listening to this podcast, or at least consider it so absurd that it can be fun to mock.

And this has been your math mutation for today.



Sunday, February 18, 2018

238: Programming Your Donkey

Audio Link

You have probably heard some form of the famous philosophical conundrum known as Buridan's Ass.  While the popular name comes from a 14th century philosopher, it actually goes back as far as Aristotle.  One popular form of the paradox goes like this: suppose there is a donkey that wants to eat some food.  There are identical apples visible ahead to its left and right, at precisely equal distances.  Since they are exactly equivalent in both distance and quality, the donkey has no rational reason to turn towards one and not the other, so it will remain in the middle and starve to death.

It seems that medieval philosophers spent quite a bit of time debating whether this paradox is evidence of free will.   After all, without the tie-breaking power of a living mind, how could the animal make a decision one way or the other?   Even if the donkey is allowed to make a random choice, the argument goes, it must use its living intuition to decide to make such a choice, since there is no rational way to choose one alternative over the other.  

You can probably think of several flaws in this argument, if you stop and think about it for a while.   Aristotle didn’t really think it posed a real conundrum when he mentioned it— he was making fun of sophist arguments that the Earth must be stationary because it is round and has equal forces operating on it in every direction.   Ironically, the case of balanced forces is one of the rare situations where the donkey analogy might be kind of useful:  in Newtonian physics, it is indeed the case that if forces are equal in every direction an object will stay still.    But medieval philosophers seem to have taken it more seriously, as a dilemma that might force us to accept some form of free will or intuition.  

I think my biggest problem with the whole idea of Buridan’s Ass as a philosophical conundrum is that it rests on a horribly restrictive concept of what is allowed in an algorithm.  By an algorithm, I mean a precise mathematical specification of a procedure to solve a problem.   There seems to be an implicit assumption in the so-called paradox that in any decision algorithm, if multiple choices are judged to be equally valid, the procedure must grind to a halt and wait for some form of biological intelligence to tell it what to do next.   But that’s totally wrong— anyone who has programmed modern computers knows that we have lots of flexibility in what we can specify.   Thus any conclusion about free will or intuition, from this paradox at least, is completely unjustified.   Perhaps philosophers in an age of primitive mathematics, centuries before computers were even conceived, can be forgiven for this oversight.

To make this clearer, let’s imagine that the donkey is robotic, and think about how we might program it.   For example, maybe the donkey is programmed to, whenever two decisions about movement are judged equal, simply choose the one on the right.   Alternatively, randomized algorithms, where an action is taken based on a random number, essentially flipping a virtual coin, are also perfectly fine in modern computing.    So another alternative is just to have the donkey choose a random number to break any ties in its decision process.    The important thing to realize here is that these are both basic, easily specifiable methods fully within the capabilities of any computers created over the past half century, not requiring any sort of free will.  They are fully rational and deterministic algorithms, but are far simpler than any human-like intelligence.   These procedures could certainly have evolved within the minds of any advanced  animal.
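Here is a sketch of both tie-breaking strategies for a hypothetical robotic donkey (the function names and scoring scheme are my own invention, of course):

```python
import random

def choose(options, scores, tiebreak="right"):
    """Pick the highest-scoring option; on an exact tie, either
    always favor the rightmost candidate or flip a virtual coin."""
    best = max(scores)
    tied = [opt for opt, s in zip(options, scores) if s == best]
    if len(tied) == 1:
        return tied[0]
    return tied[-1] if tiebreak == "right" else random.choice(tied)

# Two identical, equidistant apples: the donkey still never starves.
print(choose(["left apple", "right apple"], [1.0, 1.0]))            # right apple
print(choose(["left apple", "right apple"], [1.0, 1.0], "random"))  # either one
```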

Famous computer scientist Leslie Lamport has an interesting take on this paradox, but I think he makes a similar mistake to the medieval philosophers, artificially restricting the possible algorithms allowed in our donkey's programming.  For this model, assume the apples and donkey are on a number line, with one apple at position 0 and one at position 1, and the donkey at an arbitrary starting position s.  Let's define a function F that describes the donkey's position an hour from now, in terms of s.  F(0) is 0, since if he starts right at apple 0, there's no reason to move.  Similarly, F(1) is 1.  Now, Lamport adds a premise: the function the donkey uses to decide his final location must be continuous, corresponding to how he thinks naturally evolved algorithms should operate.  It's well understood that if you have a continuous function where F(0) is 0 and F(1) is 1, then for any value v between them, there must be a point x where F(x) is v; this is the intermediate value theorem.  So, in other words, there must be starting points x where F(x) is strictly between 0 and 1, meaning the donkey can still be stuck between the apples an hour from now.  Since the choice of one hour was arbitrary, a similar argument works for any amount of time, and we are guaranteed to be infinitely stuck from certain starting points.  It's an interesting take, and perhaps I'm not doing Lamport justice, but it seems to me that this is just a consequence of the unfair restriction that the function must be continuous.  I would expect precisely the opposite: the function should have a discontinuous jump from 0 to 1 at the midpoint, with the value there determined by one of the donkey-programming methods I discussed before.
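For concreteness, here's the kind of discontinuous decision function I have in mind, with the midpoint tie broken by one of the arbitrary rules suggested above (again, just my own sketch):

```python
def final_position(s):
    """The donkey's position an hour from now, starting at s in [0, 1]:
    walk to the nearer apple, arbitrarily favoring apple 1 on the exact
    tie at s = 0.5.  The jump there is precisely the discontinuity
    that Lamport's continuity premise rules out."""
    return 0 if s < 0.5 else 1

print([final_position(s) for s in (0.0, 0.49, 0.5, 0.51, 1.0)])  # [0, 0, 1, 1, 1]
```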

I did find one article online, though, that described a scenario where this paradox might provide some food for thought.  Think about a medical doctor who is attempting to diagnose a patient based on a huge list of weighted factors, and is at a point where two different diseases are equally likely by all possible measurements.  Maybe the patient has virus 1, and maybe he has virus 2, but the medicines that would cure each one are fatal to those without that infection.  How can he make a decision on how to treat the patient?  I don't think a patient would be too happy with either of the methods we suggested for the robot donkey: arbitrarily biasing towards one decision, or flipping a coin.  On the other hand, we don't know what goes on behind the closed doors after doctors leave the examining room to confer.  Based on TV, we might think they are always carrying on office romances, confronting racism, and consulting autistic colleagues, but maybe they are using some of our suggested algorithms as well.  In any case, if we assume the patient is guaranteed to die if untreated, is there really a better option?  In practice, doctors resolve such dilemmas by continually developing more and better tests, so the chance of truly being stuck becomes negligible.  But I'm glad I'm not in that line of work.



And this has been your math mutation for today.


Monday, January 15, 2018

237: A Skewed Perspective

Audio Link

If you're a listener of this podcast, you're probably aware of Einstein's Theory of Relativity, and its strange consequences for objects traveling close to the speed of light.  In particular, such an object will appear to have its length shortened in the direction of motion, as measured by an observer it is passing; in the object's own rest frame, nothing changes.  The contraction factor, where v is the object's velocity and c is the speed of light, is the square root of 1 minus v squared over c squared.  At ordinary speeds we observe while traveling on Earth, the effect is so close to zero as to be invisible.  But for objects near the speed of light, it can get significant.
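Plugging in a couple of speeds shows just how sharp that transition is (straightforward arithmetic, nothing more):

```python
from math import sqrt

C = 299_792_458.0  # speed of light, meters per second

def contraction_factor(v):
    """Relativistic length-contraction factor sqrt(1 - v^2/c^2)."""
    return sqrt(1 - (v / C) ** 2)

print(contraction_factor(36.1))      # ~130 km/h on the highway: 0.99999999999999...
print(contraction_factor(0.99 * C))  # 99% of lightspeed: ~0.141, an 86% shortening
```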

A question we might ask is:  if some object traveling close to the speed of light passed you by, what would it look like?    To make this more concrete, let’s assume you’re standing at the side of the Autobahn with a souped-up camera that can take an instantaneous photo, and a Nissan Cube rushing down the road at .99c, 99% of the speed of light, is approaching from your left.   You take a photo as it passes by.   What would you see in the photo?   Due to length contraction, you might predict a side view of a somewhat shortened Cube.   But surprisingly, that expectation is wrong— what you would actually see is weirder than you think.   The length would be shorter, but the Cube would also appear to have rotated, as if it has started to turn left.

This is actually an optical illusion: the Cube is still facing forward and traveling in its original direction.  The reason for this skewed appearance is a phenomenon known as Terrell Rotation.  To understand it, we need to think carefully about the path a beam of light would take from each part of the Cube to the observer.  For example, let's look at the left rear tail light.  At ordinary non-relativistic speeds, we wouldn't be able to see this until the car had passed us, since the light would be physically blocked by the car; at such speeds, we can think of the speed of light as effectively infinite, so we would capture our usual side view in our photo.  But when the speed gets close to that of light, the time it takes for the light from each part to travel to the observer is no longer negligible compared to the time the car takes to move its own length.  This means that when the car is a bit to your left, the contracted car will have moved just enough out of the way to let light from the left rear tail light reach you.  That light will arrive at the same time as light emitted more recently from the right rear tail light, and light from the parts of the back of the car in between.  In other words, because the light coming from different parts of the car started traveling at different times, your photo will capture an angled view of the entire rear of the car, and the car will appear to have rotated overall.  This is the Terrell Rotation.
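For those who want a number to hang onto: in the idealized case of a small, distant object moving perpendicular to your line of sight, the apparent rotation angle works out to the arcsine of v/c (I'll defer the derivation to the links in the show notes).  A quick sketch:

```python
from math import asin, cos, degrees, sqrt

def terrell_angle(beta):
    """Apparent Terrell rotation (in degrees) for a small, distant
    object moving at v = beta * c across the line of sight."""
    return degrees(asin(beta))

beta = 0.99
theta = terrell_angle(beta)
print(theta)  # ~81.9 degrees of apparent rotation
# Sanity check: the rotated cube's visible side face, cos(theta), matches
# the contracted length sqrt(1 - beta^2), so the illusion is self-consistent.
print(cos(asin(beta)), sqrt(1 - beta**2))
```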

I won’t go into the actual equations in this podcast, since they can be a bit hard to follow verbally, but there is a nice derivation & some illustrations linked in the show notes.   But I think the most fun fact about the Terrell Rotation is that physicists totally missed the concept for decades.   For half a century after Einstein published his theory, papers and texts claimed that if you photographed a cube passing by at relativistic speeds, you would simply see a contracted cube.    Nobody had bothered carefully thinking it through, and each author just repeated the examples they were used to.    This included some of the most brilliant physicists in our planet’s history!   There were some lesser-known physicists such as Anton Lampa who had figured it out, but they did not widely publicize their results.   It was not until 1959 that physicists James Terrell and Roger Penrose independently made the detailed calculation, and published widely-read papers on this rotation effect.    This is one of many examples showing the dangers of blindly repeating results from authoritative figures, rather than carefully thinking them through yourself.


And this has been your math mutation for today.

