Sunday, December 2, 2018

246: Election Solutions Revisited

Audio Link

Way back in podcast 172, I proposed a solution for the issue of contentious election recounts and legal battles over very close vote totals.   With all the anger, bitterness, and waste that handling these close-margin races causes, I thought it would be good to review this proposal again, and really give it some serious thought.

Our nation collectively spends millions of dollars during each election cycle on such issues, and the number of contentious races seems to have exploded in recent years.   One example is the recent Florida senate race, where over 8 million votes were cast, and the victory margin was within 10,000 votes, around an eighth of one percent.  Due to the increasing partisan divide, these close margins are now universally accompanied by accusations of small-scale cheating in local precincts or at the margins.   And we can’t deny that due to the all-or-nothing stakes, and the ability of a relatively tiny set of local votes to swing an election with national ramifications, the temptation for local partisans on both sides to cheat in small ways, if they spot an opportunity, must be overwhelming.

Now a lot of people will shrug their shoulders and say these contentious battles and accompanying waste are an inevitable consequence of democracy.   But as I pointed out in the earlier podcast, if we think carefully, that’s not quite true.   The first key point to recognize, as pointed out by authors such as Charles Seife, is that there is a *margin of error* to the voting process.    Rather than saying that Rick Scott got 4,099,805 votes and Bill Nelson got 4,089,472 votes, it might make more sense just to say that each got approximately 4.1 million votes.    There are errors due to mishandled ballots, natural wear and tear, machine failures, honest mistakes, and even local small-scale cheating.   Once you admit there is a margin of error, then the silliness of recounts becomes apparent:  a recount is just a roll of the dice, introducing a different error into the count, with no real claim to be more precise or correct.   So under the current system, if you lose a close election, it would be foolish not to pour all the resources you have into forcing a recount.

But if we agree that the voting process has a margin of error, this leads to a natural solution, suggested by Seife:   if the votes are close enough, let’s just agree to forsake all the legal battles, recounts, and bitter accusations, and flip a coin.   It would be just as accurate— just as likely to reflect the true will of the people— and a much cheaper, faster, and amicable method of resolution.   But if you think carefully, you will realize that this solution alone doesn’t totally solve the issue.   Now we will repeat the previous battles wherever the vote is just over the margin that would trigger a coin flip.    For example, if we said that the victory margin had to be 51% or greater in a 1000-vote race to avoid a coin flip, and the initial count gave the victor precisely 510 votes, there would be a huge legal battle by the loser to try to shave off just one vote and trigger his random shot.

This leads to my variant of Seife’s proposal:  let’s modify the system so that there is *always* a random element added to the election, with odds that vary according to the initial vote count.   We will use a continuous bell curve, with its peak in the middle, to determine the probability of overturning the initial result.   In the middle, it would be a 50% probability, or a simple coin flip.   In our 1000-vote election, at the 510-vote mark the initial winner’s chance of keeping the seat would be very close to 50-50, but a smidgen higher, something like 51%, depending on how we configure the curve.    Now the difference between a 50% chance of winning and a 49% or 51% chance will probably not seem very significant to either candidate:  rather than fighting a legal battle over the margin, they will probably want to go ahead, generate the random number, and be done with it.    Of course, all parties to the election must agree in advance that once the random die is cast, both winners and losers accept the result without further accusations or legal battles.   Due to the continuity of the curve, there will never be a case where a tiny vote margin will seem to create life-or-death stakes.   The probability of overturning the election will fall smoothly as the margin increases, until it gets down to approximately zero as the vote results approach 100%.   So Kim Jong Un would still be safe.
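To make this concrete, here’s a minimal Python sketch of one such curve.   The Gaussian shape and the steepness parameter k are purely illustrative assumptions on my part; the proposal doesn’t depend on any specific formula, only on a smooth curve that starts at a coin flip and rises toward certainty.

```python
import math

def retain_probability(winner_share, k=50.0):
    # Probability that the initial result stands, as a function of the
    # winner's vote share (0.5 to 1.0).  The steepness k is a tunable
    # assumption, not part of the original proposal.
    # At a 50/50 split this is exactly a coin flip; it rises smoothly
    # toward 1 as the margin grows, with no sharp cutoff anywhere.
    margin = winner_share - 0.5
    return 1.0 - 0.5 * math.exp(-k * margin * margin)

print(retain_probability(0.50))   # 0.5 -- dead heat is a pure coin flip
print(retain_probability(0.51))   # just above 0.5 (about 51% retained)
print(retain_probability(0.99))   # essentially 1 -- landslides are safe
```

The point of the smoothness is that no single vote ever changes the winner’s odds by more than a sliver, so there is never a cliff worth litigating over.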

You can probably see the natural objection to this scheme: as you move further out from the center of the curve, having *any* possibility of overriding the result starts to seem rather undemocratic.   Do we really want to have a small chance of overturning the election of a victor who got 60% of the vote?   If this system is implemented widely, such a low-probability result probably will happen somewhere at some point.   But think about all the other random factors that can affect an election:  a sudden terrorist attack, a mass layoff at a local company, the random arrest of a peripheral campaign figure, a lurid tabloid story from a prostitute, a sudden revelation of Math Mutation’s endorsement.   The truth is, due to arbitrarily timed world events, there is always a random factor in elections to some degree.   This is just a slightly more explicit case.   Is it really that much less fair?    And again, think of the benefits:  in addition to saving the millions of dollars spent on legal battles and recounts, reducing the need for bitter partisan battles in every local precinct on close elections has got to be better for our body politic.

So, what do you think?    Is it time for our politicians to consider truly out-of-the-box solutions to heal our system?   Maybe if all the Math Mutation listeners got together, we could convince a secretary of state somewhere to try this system out.   Of course, I know I’m probably just dreaming; outright nuclear war in Broward County, Florida is a much more likely solution to this issue.   At least I’m located pretty far outside the fallout zone for that one.

And this has been your math mutation for today.


References:




Sunday, October 7, 2018

245: How Far Apart Are Numbers?

Audio Link

When you draw a number line, say representing the numbers 1 through 10, how far apart do you space the numbers?    You might have trouble even comprehending the question.   If you’ve been educated in any modern school system in a developed country, you would probably think it’s obvious that the numbers are naturally placed at evenly spaced intervals along the line.    But is this method natural, or does it simply reflect what we have been taught?    In fact, if you look at studies of people from primitive societies, or American kindergarten students who haven’t been taught much math yet, they do things slightly differently.   When asked to draw a number line, they put a lot of space between the earlier numbers, and then less and less space for each successive one, with high numbers crowded together near the end.    As you’ll see in the show notes, ethnographers have found similar results when dealing with primitive Amazon tribesmen.  Could this odd scaling be just as ‘natural’ as our evenly spaced number line?

The simplest explanation for this observation might be that less educated people, due to their unfamiliarity with the task, simply don’t plan ahead.   They start out using lots of space, and are forced to squish in the later numbers closer together simply because they neglected to leave enough room.    But if you’re a fellow math geek, you have probably recognized that they could in fact be drawing a logarithmic scale, where equal distances along the line correspond to equal ratios rather than equal differences.   So for example, the space between 1 and 2 is roughly the same as the space from 2 to 4, from 4 to 8, etc.    This scale has many important scientific applications, such as its use in constructing a slide rule, the old-fashioned device for performing complex calculations in the days before electronic calculators.    Is it possible that humans have some inborn tendency to think in logarithmic scales rather than linear ones, and we somehow unlearn that during our early education?
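If you want to see that ratio property concretely, here’s a quick Python sketch; the 50-unit line width is an arbitrary choice of mine for illustration.

```python
import math

def log_position(n, max_n=10, width=50):
    # Map n to its position on a log-scaled number line of the given
    # width, running from 1 (at position 0) to max_n (at the far end).
    return width * math.log(n) / math.log(max_n)

# Equal ratios get equal spacing: the steps 1->2, 2->4, and 4->8 all
# span the same distance, while 8, 9, 10 crowd together near the end.
print(log_position(2) - log_position(1))
print(log_position(4) - log_position(2))
print(log_position(8) - log_position(4))
print(log_position(10) - log_position(9))  # much smaller gap
```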

Actually, this idea isn’t as crazy as it might sound.   If you think about it, suppose you are a primitive hunter-gatherer in the forest gathering berries, and each of your children puts a pile of their berries in front of you.   You need to decide which of your children gets an extra hunk of that giant platypus you killed today for dinner.    If you want to actually count the berries in each pile, that might take a very long time, especially if your society hasn’t yet invented a place-value system or names for large numbers.    However, to spot that one pile is roughly twice or three times the size of the other can probably be done visually.    In practice, quickly estimating the ratio between two quantities is often much more efficient than trying to actually count items, especially when the numbers involved are large.   So, thinking in ratios, which leads to a logarithmic scale, could very well be perfectly natural, and developing this sense may have been a useful survival trait for early humans.     As Dehaene et al write in their study, linked in the show notes, “In the final analysis, the logarithmic code may have been selected during evolution for its compactness: like an engineer’s slide rule, a log scale provides a compact neural representation of several orders of magnitude with fixed relative precision.”

The same study notes that even after American children learn the so-called “correct” way to create a number line for small numbers, for a few years in elementary school they still tend to draw a logarithmic view when asked about higher numbers, in the thousands for example.   But eventually their brains are reprogrammed by our educational system, and they learn that all number lines are “supposed” to be drawn in the linear, equally-spaced way.    However, even in adults, additional studies have shown that if the task is made more abstract, for example using collections of dots too large to count or using sound sequences, a logarithmic type of comparison can be triggered.    So, it looks like we do not truly lose this inherent logarithmic sense, but are taught to override it in certain contexts. 
   
I wonder if mathematical education could be improved by taking both views into account from the beginning.   It seems like we might benefit from harnessing this innate logarithmic sense, rather than hiding it until much more advanced levels of math are reached in school.   On the other hand, I could easily imagine young children getting very confused by having to learn to draw two types of number lines depending on the context.    As with everything in math education, there really is no simple answer.

And this has been your math mutation for today.

References:






Saturday, September 8, 2018

244: Is Music Just Numbers?

Audio Link


I was surprised recently to hear an advertisement offering vinyl records of modern music for audio enthusiasts.   In case you were born in recent decades, music used to be stored on these round black discs called “records” in an analog format.    This means that the discs were inscribed with grooves that directly represent the continuous sound waves of the music.   In contrast, the vast majority of the music we listen to these days is digital:   the sounds have been boiled down to a bunch of numbers, which are then decoded by our iPods or other devices to reproduce the original music.    I’ve never noticed a problem with this, but certain hobbyists claim that there is no way a digital reproduction of music can be as faithful as an analog one.   Do they have a point?

The key issue here is the “sampling rate”.   As you would probably guess, there is no way a series of numbers can represent a truly continuous music wave— they represent sounds at a bunch of discrete points in time, which are played quickly one after another to produce the illusion of continuity.    These rates are too fast for most of us to discern:  the compact disc standard, for example, uses a sampling rate of 44.1 KHz, or 44,100 samples per second.   This is far more than generally accepted estimates of what our ears can perceive.   It also meets the criterion calculated by Swedish-American engineer Harry Nyquist in the early 20th century, showing that this sampling rate is more than sufficient to faithfully reproduce all sounds within the range of human hearing.    But the small group of audiophiles who insist on listening to vinyl claim they can hear the difference.
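The arithmetic behind that Nyquist claim is easy to sketch; the 20 KHz figure used here is the commonly cited upper limit of human hearing, not something stated in the episode.

```python
def min_sampling_rate(max_freq_hz):
    # Nyquist criterion: to capture a signal faithfully, you must
    # sample at more than twice its highest frequency component.
    return 2 * max_freq_hz

CD_RATE = 44_100        # CD standard, samples per second
HEARING_LIMIT = 20_000  # Hz, a commonly cited upper bound for human ears

print(min_sampling_rate(HEARING_LIMIT))            # 40000
print(CD_RATE > min_sampling_rate(HEARING_LIMIT))  # True
```

So the CD rate clears the required threshold with a bit of headroom to spare.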

These analog boosters of vinyl seem to be relatively vocal in online communities.  They claim that there are subtle effects that make analog-reproduced music seem “brighter”, that digital has a “hard edge”, or other vague criticisms.    There are also claims that the analog method provides an “emotional connection” to the original artist that is destroyed by digital.   The most overwrought of them claim they are saving the world, with statements like, “future generations will be sad to realize that we didn’t preserve the majority of our music, we just made approximate digital records of it.”   Some of them do admit that part of their enthusiasm comes from the rituals of analog music:  leafing through records in a store, carefully pulling one out of its sleeve and placing the needle, etc.

You can think of audio sampling rates kind of like pixels in computer graphics, the small blocks that make up the images we see on any modern computer screen.  In old computers from the 1980s, you could clearly see the pixels, since computer memory was expensive and the resolution, or density of the pixels, rather poor.    But these days, even professional photographers use digital equipment to capture and edit photos.    If you have purchased wedding pictures or something similar from a digitally-equipped professional, I doubt you have looked at the pictures and noticed anything missing.     I’ve seen this lead to a few arguments with some of my snobbier friends about the need for art museums:   since we can see high-resolution digital reproductions of just about any classic art online, why do we need to walk to an old building and stare at the painstakingly preserved original canvases?    I think the “museum ritual” is kind of similar to the vinyl ritual:  it evokes emotional memories and nostalgia for a past history, but isn’t really necessary if you think it through.

Getting back to the audio question, what we would really like to see are double-blind studies challenging audiophiles to distinguish analog vs digital music without knowing its source.   I was surprised not to see any online— perhaps the logistics of such a study would be challenging, since you would have to use completely non-digital means to transmit the sound from both sources to the listening room, and avoid telltale giveaways like the need to flip records after a while.   But in 2007 the Boston Audio Society did something pretty close, asking a group of volunteer audiophiles to try to distinguish recordings, across a variety of genres, played at the standard CD sampling rate of 44.1 KHz vs high-resolution formats like SACD, which sample at around 50 times higher rates.   They would take the high-resolution recordings and transfer them back to CD format for the experiment, so they were comparing the exact same music at the two sampling rates.   Surely if sounds lost at the standard CD sampling rate mattered, these improved formats should give much better sound, right?

As you have probably guessed, their results showed that the massively higher sampling rate made essentially no difference— the volunteer audiophiles could not distinguish the high-resolution and CD-quality recordings any better than chance would predict, at 49.82%.    So, it looks like the human ear really can’t distinguish sampling rates beyond what a CD provides, which, after all, is no surprise given the Nyquist criterion we mentioned earlier.      Of course, some New Age types will continue to claim a mystical emotional connection that can only be transmitted through analog, and I’ll have to let them debate that with their spirit guides.    But if you look at the science, you don’t need to comb the garage sales for old record players and unscratched vinyl—digital music should work just fine.

And this has been your math mutation for today.




References:



Sunday, July 29, 2018

243: The Consciousness Continuum

Audio Link


Most of us think of consciousness as a kind of division between several states.   We can be awake, unconscious, or maybe somewhat drowsy on the boundary between the two, and occasionally interact with our out-of-reach “subconscious” without realizing it.   But is there more to the concept?   In his book “The Tides of Mind”, Yale computer scientist & cognitive researcher David Gelernter makes the case that consciousness is more of a continuum:   at any given moment, you are at some point on a spectrum ranging from pure thinking to pure feeling.    You move up and down the spectrum depending on how focused and logical your thoughts are, and how connected they are with the outside world.  At the bottom, you retreat inward into your mind, and fall asleep.

So, what does this have to do with mathematics?    Gelernter points out a famous quote by physicist Eugene Wigner about John Von Neumann, who is widely acknowledged as one of the greatest mathematicians of the 20th century:  “Whenever I talked with Von Neumann, I always had the impression that only he was fully awake”.    Now of course, we need to take Gelernter’s use of this quote with a grain of salt, since Wigner was probably speaking in a colloquial sense and not aware of this modern theory.    But he does point out that in this spectrum theory, logical thinking and reasoning is at the very top, indeed the most ‘awake’ portion of that spectrum.   And this might also explain why mathematics tends to be more challenging to the average person than many other human pursuits:   it requires that you keep focused attention, avoiding any temptation towards reminiscence, daydreaming, or emotion.   And if you fall asleep, you probably won’t get any math done.

Of course, there are certain other types of genius that can occur when someone has exceptional abilities farther down the spectrum.   Gelernter also points out the example of Napoleon, who was skilled at stirring the emotions of his followers, and could draw vast quantities of past military experiences from his memory to guide his plans and policies.   As a young officer Napoleon is said to have claimed, “I do a thousand projects every night as I fall asleep”.   In other words, in his semi-conscious state near the bottom of the spectrum, he could easily retrieve various scenarios from his memory, varying them and playing with them creatively to discover different ways the next day’s battle might play out.

The position of emotions and memories in this theory is also somewhat strange.   Gelernter writes, “Emotion grows increasingly prominent as reflective thinking fades and the brightness of memories grows— and by not creating memories, we unmake our experience as it happens”.   In other words, what others might call the subconscious is simply your internal array of memories; rather than having a separate subconscious mind, the illusion of a subconscious is just what results when you are low on the spectrum and increasingly drawing from your memories instead of logically observing the outside world.   Emotions are our internal summaries of the flavor of a set of memories, which become increasingly prominent as we are lower on the spectrum.

One interesting consequence of this theory, in Gelernter’s view, is that “computationalism”, the idea that the human mind might be equivalent to some advanced computer, is fundamentally wrong.   Due to our reliance on the capabilities of the mind to analyze memories in an unfocused way and generate emotions, no computer could replicate this state of being.   I may not be doing the theory justice, but I don’t find this argument very convincing.    Ironically, it might be said that he’s assuming a computer model of a human mind is restricted to a “Von Neumann architecture”, the type of computers that most of us have today, and which seem to directly implement the logical, mathematical thought that occurs at the top of our spectrum.   But there are many alternative types of computers that have been theorized.  In fact,  currently there is explosive growth happening in “neural network” computers, inspired by the design of a human brain.   While I would tend to agree with Gelernter that the spectrum of consciousness would be very hard to model in a Von Neumann architecture, I would still bet that some brain-like computing device will one day be able to do everything a human brain can do.   On the other hand, that might just be a daydream  resulting from my down-spectrum lack of logical thinking at the moment.

And this has been your math mutation for today.

References:


Sunday, June 24, 2018

242: Effort and Impact

Audio Link


I’m always amused by management classes and papers that take some trivial and obvious mathematical model, give it a fancy name and some imprecise graphs, and proudly proclaim their brilliant insights.    Now of course, such models may be a useful tool to help think through daily issues, and should be used implicitly as a simple matter of common sense, but it’s really important to recognize their limitations.   Many limitations come from the imprecision of the math involved, human psychological factors, or the difficulty of translating the many constraints of a real-life situation into a few simple parameters of a model.    I was recently reminded of a classic example of this pitfall, “Impact-Effort Analysis”, when reading a nice online article by my former co-worker Brent Logan.   While the core concepts of impact-effort analysis are straightforward, it is critical to recognize the limitations of trying to apply this simple math to real-life problems.

The basic concept of impact-effort analysis is that, when you are trying to decide what tasks to work on, you should create a list and assign to each an “impact”, the value it will bring to you in the end, and an “effort”, the cost of carrying out the task.   You then graph all the tasks on an impact vs effort graph, which you divide into four quadrants, and see which quadrant each falls into.   For example, you may be considering ways to promote your new math podcast.   (By the way, this is just a hypothetical; who would dare compete with Math Mutation?)     A multimedia ad campaign would be high effort but high impact, the quadrant known as a “big bet”.   Posting a note to your 12 friends on your personal Facebook page would be low effort and low impact, which we label as “incremental”.     Hiring a fleet of joggers to run across the country wearing signboards with your URL would probably be high effort and low impact, which we label the “money pit” quadrant.    And finally, using your personal connections with Weird Al Yankovic to have him write a new hit single satirizing your podcast is low effort and high impact— this is in the ideal quadrant, the “easy wins”.   (And no, I don’t really have such connections, sorry!)
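The quadrant bookkeeping is simple enough to sketch in a few lines of Python; the 0-to-1 rating scale and the 0.5 cutoffs are illustrative assumptions of mine, not part of any official methodology.

```python
def classify_task(impact, effort, threshold=0.5):
    # Place a task in one of the four impact-effort quadrants.
    # Ratings run from 0 to 1; the 0.5 split is an arbitrary choice.
    if impact >= threshold:
        return "big bet" if effort >= threshold else "easy win"
    return "money pit" if effort >= threshold else "incremental"

print(classify_task(0.9, 0.9))  # big bet     (multimedia ad campaign)
print(classify_task(0.1, 0.1))  # incremental (Facebook post)
print(classify_task(0.1, 0.9))  # money pit   (fleet of joggers)
print(classify_task(0.9, 0.1))  # easy win    (Weird Al hit single)
```

Of course, as the rest of the episode argues, the hard part is not this classification step but coming up with honest impact and effort numbers in the first place.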

This sounds like a sensible and straightforward idea— who could argue with rating your tasks this way in order to prioritize?    Well, that is true:  if you can correctly assign impact and effort values, this method is very rational.    But Logan points out that our old friends, the cognitive biases, play a huge role in how we come up with these numbers in real life.

On the side of effort estimation, psychological researchers have consistently found that small teams underestimate the time it takes to complete a task.   Kahneman and Tversky have labeled this the “Planning Fallacy”.   Factors influencing this include simple optimism, failing to anticipate random events, overhead of communication, and similar issues.   There is also the question of dealing with constant pressure from bosses who want aggressive-looking schedules to impress their superiors.     You may have heard the well-known software engineering saying that stems from this bias:   “The first 90% of the project takes the first 90% of the time, and the last 10% of the project takes the other 90% of the time.”

Similarly, on the impact side, it’s very easy to overestimate when you get excited about something you’re doing, especially when the results cannot be measured in advance.    In 2003, Kahneman and Lovallo extended their definition of the Planning Fallacy to include this aspect as well.  I find this one pretty easy to believe, looking at the way things tend to work at most companies.    If you can claim you, quote, “provided leadership” by getting the team to do something, you’re first in line for raises, promotions, etc.   Somehow I’ve never seen my department’s VP bring someone up to the front of an assembly and give them the “Worry Wart Award” and promotion for correctly pointing out that one of his ideas was a dumb waste of time.  If your company does that effectively, I may be sending them a resume.

Then there are yet more complications to the impact-effort model.   If you think carefully about it, you realize that it has one more limitation, as pointed out in Logan’s article:   impact ratings need to include negative numbers.   That fleet of podcast-promoting joggers, for example, might cost more than a decade of profits from the podcast, despite the fact that the brilliant idea earned you a raise.    Thus, in addition to the four quadrants mentioned earlier, we need a category of “loss generators”, a new pseudo-quadrant at the bottom covering anything where the impact is below zero.

So what do we do about this?    Well, due to these tendencies to misestimate, and the danger of loss generators, we need to realize that the four quadrants of our effort-impact graph are not homogeneous or of equal size.    We need to have a very high threshold for estimated impact, and a low threshold for estimated effort, to compensate for these biases— so the “money pit” quadrant becomes a gigantic one, the “easy win” a tiny one, and the other two proportionally reduced.    Unfortunately, it’s really hard to compensate in this way with this type of model, since human judgement calls are so big a factor in the estimates.    You might subconsciously change your estimates to compensate.   Maybe if you had your heart set on that fleet of joggers, you would decide that its high impact should be taken as a given, so you rate it just enough to keep it out of the money pit.

Logan provides a few suggestions for making this type of analysis a bit less risk-prone.   Using real data whenever possible to get effort and impact numbers is probably the most important one.    Also in this category is “A/B testing”, where you do a controlled experiment comparing multiple methods, on a small scale before making a big investment.   Similarly, he suggests breaking tasks into smaller subtasks as much as possible— the smaller the task, the easier it is to come up with a reasonable estimate.   Finally, factoring in a confidence level to each estimate can also help.     I think one other implied factor that Logan doesn’t state explicitly is that we need to be consistently agile:   as a project starts and new data becomes available, constantly re-examine our efforts and impacts to decide whether some re-planning may be justified.

But the key lesson here is that there are always a lot of subtle details in this type of planning, due to the uncertainties of business and of human psychology.  Always remember that writing  down some numbers and drawing a pretty graph does NOT mean that you have successfully used math to provide indisputable rationality to your business decisions.  You must constantly take ongoing followup actions to make up for the fact that human brains cannot be boiled down to a few numbers so simply.

And this has been your math mutation for today.

References:


Sunday, June 3, 2018

241: A Strange Fixation

Audio Link


Hi everyone.   Before we get started, I want to thank listener ‘katenmkate’ for posting another nice review in Apple Podcasts recently.   Thanks Kate!   By the way, other listeners:  the podcast world is constantly growing more competitive, so a new positive review is still useful— please think about posting one if you like the podcast.   And the same comment goes for the Math Mutation book, which is available & reviewable on Amazon.   Also remember, if you like Math Mutation enough that you want to donate money, please make a donation to your favorite charity & email me about it, and I’ll mention you on the podcast for that as well.

Now for today’s topic.   Try the following experiment:   Take any 4-digit number and generate 2 values from it by arranging the digits in ascending and then descending order.   By the way, don’t choose a degenerate case where the number has all digits the same, like ‘2222’.   (For today I’ll pronounce 4-digit numbers just by saying the digits, to save time.)   Now subtract the smaller 4-digit number from the larger one.  You should find that you get another 4-digit number (or a 3-digit one, in which case you can add a leading 0).   Repeat the process for a few steps.   Do you notice anything interesting?

For example, let’s start with 3524.  After we generate numbers by the orderings of the digits, we subtract 5432 - 2345, and get 3087.    Repeating the procedure, we subtract 8730-0378, and get 8352.   One more time, and we do 8532 - 2358, to get 6174.   So far this just seems like a weird procedure for generating arbitrary numbers, maybe a bit of a curiosity but nothing special.   But wait for the next step.   From 6174, we do 7641 - 1467, which gives us… 6174 again!   We have reached an attractor, or fixed point:   repeating our process always brings us back to this same number, 6174.   If you start with any other non-degenerate 4-digit number, you will find that you still get back to this same value, though the number of steps may vary.
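For the curious, the whole procedure fits in a few lines of Python, so you can easily verify other starting values yourself:

```python
def kaprekar_step(n):
    # Arrange the 4 digits (keeping leading zeros) in descending and
    # ascending order, then subtract the smaller from the larger.
    digits = f"{n:04d}"
    hi = int("".join(sorted(digits, reverse=True)))
    lo = int("".join(sorted(digits)))
    return hi - lo

def steps_to_constant(n):
    # Count iterations until we reach the fixed point 6174.
    # (Avoid degenerate repdigit inputs like 2222, which go to 0.)
    count = 0
    while n != 6174:
        n = kaprekar_step(n)
        count += 1
    return count

print(kaprekar_step(3524))      # 3087, as in the example above
print(steps_to_constant(3524))  # 3
print(kaprekar_step(6174))      # 6174 -- the fixed point
```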

This value, 6174, is known as Kaprekar’s Constant,  named after the Indian mathematician who discovered this strange property back in 1949.   Strangely, I was unable to find any mention of an elegant proof of why this number is special; it seems like it’s just one of those arbitrary coincidences of number theory.  It’s easy to prove the correctness of the Constant algebraically, but that really doesn’t provide much fundamental insight.   This constant does have some interesting additional properties though:   in the show notes, you’ll find a link to a fun website by a professor named Girish Arabale.    He uses a computer program to plot how many steps of Kaprekar’s operation it takes to reach the constant, indicating the step count by color, and generates some pretty-looking patterns.   The web page even showcases some music generated by this operation, though I still prefer David Bowie. 

We see more strange properties if we investigate different numbers of digits.   There is a corresponding constant for 3-digit operations, 495, but not with any higher digit counts.   If you try the same operation with 2, 5 or 6 digit numbers, for example, you will eventually settle into a small loop of numbers which generate each other, or a small set of possible self-generating constants, rather than a single arbitrary constant.    Let’s try 2 digits, starting arbitrarily at 90:      90 - 09 = 81.   81 - 18 = 63.  63 - 36 = 27.   72 - 27 = 45.  54 - 45 = 09— and you can see we’re now in a loop.   If we continue the operation, we will repeat  9, 81, 63, 27, 45, 9 forever!   You will see similar results if trying the operation in bases other than 10, where the meaning of the digits is slightly different and a result similar to Kaprekar’s doesn’t appear so easily.
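The same iteration, generalized to an arbitrary digit count, makes these loops easy to find in Python; the cycle detection here is the simplest possible approach, just remembering every value seen so far:

```python
def kaprekar_step(n, ndigits):
    # Descending-digits minus ascending-digits, padding with leading
    # zeros out to the given digit count.
    s = f"{n:0{ndigits}d}"
    return int("".join(sorted(s, reverse=True))) - int("".join(sorted(s)))

def find_cycle(n, ndigits):
    # Iterate until a value repeats, then return the loop we fell into.
    seen = []
    while n not in seen:
        seen.append(n)
        n = kaprekar_step(n, ndigits)
    return seen[seen.index(n):]

print(find_cycle(90, 2))    # [81, 63, 27, 45, 9] -- the 2-digit loop
print(find_cycle(100, 3))   # [495]  -- the 3-digit constant
print(find_cycle(3524, 4))  # [6174] -- Kaprekar's Constant
```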

One other aspect of Kaprekar’s Constant that we should point out is that it is really just a specific example of the general phenomenon of “attractors”.   If you take an arbitrary iterative operation, where you repeat some calculation that leads you back into the same space of numbers, it’s not that unusual to find that you eventually settle into a small loop, a small set of constants, or a fixed point like Kaprekar’s Constant.   I think Kaprekar’s constant became famous because it’s described with a relatively simple operation, and was discovered early in the history of the popularization of the mathematical concept of attractors.    

Thinking about such things has actually become very important in some areas of computer science, such as the generation of pseudo-random numbers.   We often want to use a repeated mathematical procedure to generate numbers that look random to the consumer, but that, from the same starting point, called a “seed”, will always generate the same sequence.   This is important in areas like simulation testing of engineering designs.   In my day job, for example, I might want to see how a proposed new computer chip would react to a set of several million random stimuli, but in case something strange happens, I want to easily reproduce the same set of supposedly “random” inputs later.     The typical solution is a pseudo-random number generator, where the previous number generates the next one through some function.
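As a toy illustration (not the generator any real verification tool uses), a classic linear congruential generator shows how a seed pins down the whole “random” stream; the multiplier and increment here are the well-known Numerical Recipes constants:

```python
class LCG:
    """Toy linear congruential generator: next = (a*prev + c) mod 2**32."""
    def __init__(self, seed):
        self.state = seed

    def next(self):
        self.state = (1664525 * self.state + 1013904223) % 2**32
        return self.state

def stimuli(seed, n):
    """Generate n pseudo-random test stimuli from a given seed."""
    gen = LCG(seed)
    return [gen.next() for _ in range(n)]

# Same seed, same "random" inputs -- which is exactly what makes a
# failing simulation run reproducible for later debugging:
print(stimuli(42, 3) == stimuli(42, 3))  # True
```

Rerunning with seed 42 reproduces the identical stream, while a different seed gives a different one, which is the whole point of seeding.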

Kaprekar’s Constant helps demonstrate that if we choose an arbitrary mathematical procedure to repeatedly generate numbers from other ones, the results might not end up as arbitrary or random-looking as you would expect.   You can create some convoluted repetitive procedure that really has no sensible reason in its definition for creating a predictable pattern, like Kaprekar’s procedure, and suddenly find that it always leads you down an elegant path to a specific constant or a small loop of repeated results.   Thus, the designers of pseudo-random number generators must be very careful that their programs really do create random-seeming results, and are free of the hazards of attractors and fixed points like Kaprekar’s Constant.

And this has been your Math Mutation for today.


References:



Saturday, April 28, 2018

240: R.I.P. Little Twelvetoes

Audio Link

I was sad to hear of the recent passing of legendary jazz artist Bob Dorough, whose voice is probably still echoing in the minds of those of you who were children in the 1970s.    While Dorough composed many excellent jazz tunes— his “Comin’ Home Baby” is still on regular rotation on my iPhone— he is most famous for his work on Schoolhouse Rock.   Schoolhouse Rock was a set of catchy music videos shown on TV in the 70s, helping to provide educational content for elementary-school-age children in the U.S.   Apparently three minutes of this educational content per hour would offset the mind-melting effects of the horrible children’s cartoons of the era.    But Dorough’s work on Schoolhouse Rock was truly top-notch:  the music really did help a generation of children to memorize their multiplication tables, as well as learning some basic facts about history, science, and grammar.   However, my favorite of the songs may have been the least effective, with lyrics that were mostly incomprehensible to kids who were not total math geeks.   I’m talking about the song related to the number 12, “Little Twelvetoes.”   I still chuckle every time that one comes up on my iPhone.

Now, most of the songs in the Multiplication Rock series tried to relate their numbers to something concrete that the kids could latch on to.    For example, “Three is a Magic Number” talked about a family with a man, woman, & baby;  “The Four-Legged Zoo” talked about the many animals with four legs, and “Figure Eight” talked about ice skating.    But Dorough must have been smoking some of those interesting 1970s drugs when he got to the number 12.  Instead of choosing something conventional like an egg carton or the grades in school, he envisioned an alien with 12 fingers and toes visiting Earth, and helping humans to do math.   OK, this was a bit odd, though I think it could have been made to work— but he did his best to pile further oddities on top of that, with lyrics especially designed to confuse the average young child.

First, the song spends a lot of time introducing the concept that the alien counts in base 12.   Here’s the actual narration:   

Now if man had been born with 6 fingers on each hand,  he'd also have 12 toes or so the theory goes. Well, with twelve digits, I mean fingers, he probably would have invented two more digits when he invented his number system. Then, if he saved the zero for the end, he could count and multiply by twelve just as easily as you and I do by ten.
Now if man had been born with 6 fingers on each hand, he'd probably count: one, two, three, four, five, six, seven, eight, nine, dek, el, doh. "Dek" and "el" being two entirely new signs meaning ten and eleven.  Single digits!  And his twelve, "doh", would be written 1-0.
Get it? That'd be swell, for multiplying by 12.

For those of us who were really into this stuff, that concept was pretty cool.   But for the average elementary school student struggling to learn his regular base-10 multiplication tables, introducing another base and number system doesn’t seem like the best teaching technique.   To be fair, Dorough was just following the lead of the ill-fated “New Math” movement of the time, which I have mentioned in earlier episodes of this podcast.   A group of professors decided that kids would learn arithmetic better if teachers concentrated on the theoretical foundations of counting systems, rather than traditional drilling of arithmetic problems.   Thankfully, their mistake was eventually realized, though later educational fads haven’t been much better.

On top of pushing this already-confusing concept, the song introduced those strange new digits “dek” and “el”, and a new word “doh” for 12.   While it’s true that we do need more digits when using a base above 10, real-life engineers who use higher bases, most often base-16 in computer-related fields, just use letters for the digits after 9:  A, B, C, etc.    That way we have familiar symbols in an easy-to-remember order.   I guess it’s fun to imagine new alien squiggles for the extra digits instead…  but I think that just makes the song even more confusing to a young child.   And why not just say “twelve” instead of a new word “doh” when we get to the base?  (Note, however, that this song predated Homer Simpson.)

But the part of the song I find truly hilarious is the refrain, which implies that having this alien around would help us to do arithmetic when multiplying by the number 12.   It goes, “If you help me with my twelves, I'll help you with your tens.  And we could all be friends.”   But think about this a minute.   It’s true that the alien who writes in base 12 could easily multiply by 12 by adding a ‘0’ to a number, just like we could do when multiplying by 10.   So suppose you ask Little Twelvetoes to multiply 9 times 12.   He would write down “9 0”.  Exactly how would this be helpful?   You would now have to convert the number written as 90 in base 12 to a base 10 number for you to be able to understand it, an operation at least as difficult as multiplying 9 times 12 would be in the first place!   So although this alien’s faster way of writing down answers to times-12 multiplication would be interesting, it would be of absolutely no help to a human doing math problems.    You could be friends with the alien, but your math results would just confuse each other.
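To see the conversion the alien’s answer would force on you, note that base-12 “90” is worth 9 × 12 in our notation; Python happens to parse bases 2 through 36 directly, so a one-liner makes the point:

```python
# Little Twelvetoes writes 9 times 12 as "90" -- but that's base 12.
alien_answer = "90"
human_value = int(alien_answer, 12)  # convert from base 12 to an ordinary int
print(human_value)                   # 108, which is exactly 9 * 12
```

So the “help” from the alien still leaves you doing a base conversion that is at least as much work as the original multiplication.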

Anyway, I should point out that despite these various absurdities, the part of the song that lays out the times-12 multiplication table is pretty catchy.   So if the kids could get past the rest of the confusing lyrics, it probably did still achieve its goal of helping them to learn multiplication.   And of course, despite the fact that I enjoy making fun of it, I and millions of kids like me did truly love this song— and still remember it over 40 years later.   Besides, this is just one of many brilliantly memorable tunes in the Schoolhouse Rock series.   I think Bob Dorough’s music and lyrics will continue to play in my iPhone rotation for many years to come.

And this has been your Math Mutation for today.

References:







Saturday, March 31, 2018

239: The Shape Of Our Knowledge

Audio Link

Recently I’ve been reading Umberto Eco’s essay collection titled “From the Tree to the Labyrinth”.   In it, he discusses the many attempts over history to cleanly organize and index the body of human knowledge.    We have a natural tendency to try to impose order on the large amount of miscellaneous stuff we know, for easy access and for later reference.   As typical with Eco, the book is equal parts fascinating insight, verbose pretentiousness, and meticulous historical detail.    But I do find it fun to think about the overall shape of human knowledge, and how our visions of it have changed over the years.

It seems like most people organizing a bunch of facts start out by trying to group them into a “tree”.   Mathematically, a tree is basically a structure that starts with a single node, which then links to sub-nodes, each of which links to sub-sub-nodes, and so on.   On paper, it looks more like a pyramid.   But essentially it’s the same concept as folders, subfolders, and sub-sub folders that you’re likely to use on your computer desktop.   For example, you might start with ‘living creatures’,   Under it you draw lines to ‘animals’, ‘plants’, and ‘fungi’.   Under the animals you might have nodes for ‘vertebrates’, ‘invertebrates’, etc.     Actually, living creatures are one of the few cases where nature provides a natural tree, corresponding to evolutionary history:  each species usually has a unique ancestor species that it evolved from, as well as possibly many descendants.
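In code, such a tree is just nested containers; this little sketch (using the living-creatures example above, with names of my choosing) prints every root-to-node path, much like listing folders and subfolders:

```python
# The 'living creatures' hierarchy as nested dicts -- the same shape as
# folders, subfolders, and sub-sub-folders on a desktop.
tree = {
    "living creatures": {
        "animals": {"vertebrates": {}, "invertebrates": {}},
        "plants": {},
        "fungi": {},
    }
}

def paths(node, prefix=()):
    """Yield every root-to-node path in the tree."""
    for name, children in node.items():
        yield prefix + (name,)
        yield from paths(children, prefix + (name,))

for p in paths(tree):
    print(" / ".join(p))
```

Each node hangs off exactly one parent, which is what makes this a tree rather than a general graph.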

Attempts to create tree-like organizations date back at least as far as Aristotle, who tried to identify a set of rules for properly categorizing knowledge, and later authors made numerous attempts to fully construct such catalogs.   Eco points out some truly hilarious (to modern eyes) attempts to create universal knowledge categories, such as Pedro Bermudo's 17th-century effort to organize knowledge into exactly 44 categories.  While some, such as “elements”, “celestial entities”, and “intellectual entities”, seem relatively reasonable today, other categories include “jewels”, “army”, and “furnishings”.     Perhaps the inclusion of “furnishings” as a top-level category on par with “celestial entities” just shows us how limited human experience and knowledge typically was before modern times.

Of course, the more knowledge you have, the harder it is to cleanly fit into a tree, and the more logical connections you see that cut across the tree structure.   Thus our attempts to categorize knowledge have evolved more into what Eco calls a labyrinth, a huge collection with connections in every direction.  For example, wandering down the tree of species, you need to follow very different paths to reach a tarantula and a corn snake, one being an arachnid and the other a reptile.   Yet if you’re discussing possible caged parent-annoying pets with your 11-year old daughter, those two might actually be closely linked.    So our map of knowledge, or semantic network, would probably merit a dotted line between the two.     Thus, we don’t just traverse directly down the tree, but have many lateral links to follow, so Eco describes our real knowledge as more of a labyrinth.   He seems to prefer the vivid imagery of a medieval scholar wandering through a physical maze, but in a mathematical sense I think he is referring more to what we would call a ‘graph’, a huge collection of nodes with individual connections in arbitrary directions.
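A graph, by contrast, allows arbitrary cross-links; this toy semantic network (the edges are my own invention) adds the “dotted line” between the two pets, and a standard breadth-first search finds the short lateral route:

```python
from collections import deque

# Taxonomy edges point downward; the 'tarantula' edge is the lateral
# "dotted line" -- both animals are plausible caged starter pets.
graph = {
    "animals": ["arachnids", "reptiles"],
    "arachnids": ["tarantula"],
    "reptiles": ["corn snake"],
    "tarantula": ["corn snake"],
    "corn snake": [],
}

def hops(start, goal):
    """Breadth-first search: fewest edges from start to goal, or None."""
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

print(hops("tarantula", "corn snake"))  # 1 -- thanks to the lateral edge
```

Without the lateral edge, the only routes between the two pets would run all the way up and down the taxonomy, which is exactly the labyrinth-versus-tree distinction.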

On the other hand, this labyrinthine nature of knowledge doesn’t negate the usefulness of tree structures— as humans, we have a natural need to organize into categories and subcategories to make sense of things.   Nowadays, we realize both the ‘tree’ and ‘labyrinth’ views of knowledge on the Internet.   As a tree, the internet consists of pages with subpages, sub-sub-pages, etc.   But a link on any page can lead to an arbitrary other page, not part of its own local hierarchy, whose knowledge is somehow related.   It’s almost too easy these days.   If you’re as old as me, you can probably recall your many hours poring through libraries researching papers back in high school and college.   You probably spent lots of time scanning vaguely related books to try to identify these labyrinth-like connections that were not directly visible through the ‘trees’ of the card catalog or Dewey Decimal system.

Although it’s very easy today to find lots of connections on the Internet, I think we still have a natural human fascination with discovering non-obvious cross connections between nodes of our knowledge trees.   A simple example is our amusement at puns, when we are suddenly surprised by an absurd connection due only to the coincidence of language.    Next time my daughter asks if she can get a tarantula for Christmas, I’ll tell her the restaurant only serves steak and turkey.    More seriously, finding fun and unexpected connections is one reason I enjoy researching this podcast, discussing obscure tangential links to the world of mathematics that are not often displayed in the usual trees of math knowledge.   Maybe that’s one of the reasons you like listening to this podcast, or at least consider it so absurd that it can be fun to mock.

And this has been your math mutation for today.


References:






Sunday, February 18, 2018

238: Programming Your Donkey

Audio Link

You have probably heard some form of the famous philosophical conundrum known as Buridan’s Ass.   While the popular name comes from a 14th century philosopher, it actually goes back as far as Aristotle.   One popular form of the paradox goes like this:   Suppose there is a donkey that wants to eat some food.   There are equally spaced and identical apples visible ahead to its left and right.   Since they are precisely equivalent  in both distance and quality, the donkey has no rational reason to turn towards one and not the other, so it will remain in the middle and starve to death.   

It seems that medieval philosophers spent quite a bit of time debating whether this paradox is evidence of free will.   After all, without the tie-breaking power of a living mind, how could the animal make a decision one way or the other?   Even if the donkey is allowed to make a random choice, the argument goes, it must use its living intuition to decide to make such a choice, since there is no rational way to choose one alternative over the other.  

You can probably think of several flaws in this argument, if you stop and think about it for a while.   Aristotle didn’t really think it posed a real conundrum when he mentioned it— he was making fun of sophist arguments that the Earth must be stationary because it is round and has equal forces operating on it in every direction.   Ironically, the case of balanced forces is one of the rare situations where the donkey analogy might be kind of useful:  in Newtonian physics, it is indeed the case that if forces are equal in every direction an object will stay still.    But medieval philosophers seem to have taken it more seriously, as a dilemma that might force us to accept some form of free will or intuition.  

I think my biggest problem with the whole idea of Buridan’s Ass as a philosophical conundrum is that it rests on a horribly restrictive concept of what is allowed in an algorithm.  By an algorithm, I mean a precise mathematical specification of a procedure to solve a problem.   There seems to be an implicit assumption in the so-called paradox that in any decision algorithm, if multiple choices are judged to be equally valid, the procedure must grind to a halt and wait for some form of biological intelligence to tell it what to do next.   But that’s totally wrong— anyone who has programmed modern computers knows that we have lots of flexibility in what we can specify.   Thus any conclusion about free will or intuition, from this paradox at least, is completely unjustified.   Perhaps philosophers in an age of primitive mathematics, centuries before computers were even conceived, can be forgiven for this oversight.

To make this clearer, let’s imagine that the donkey is robotic, and think about how we might program it.   For example, maybe the donkey is programmed to, whenever two decisions about movement are judged equal, simply choose the one on the right.   Alternatively, randomized algorithms, where an action is taken based on a random number, essentially flipping a virtual coin, are also perfectly fine in modern computing.    So another alternative is just to have the donkey choose a random number to break any ties in its decision process.    The important thing to realize here is that these are both basic, easily specifiable methods fully within the capabilities of any computers created over the past half century, not requiring any sort of free will.  They are fully rational and deterministic algorithms, but are far simpler than any human-like intelligence.   These procedures could certainly have evolved within the minds of any advanced  animal.
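Both tie-breaking rules fit in a few lines; this sketch (the parameter names are mine) is deterministic by default and can flip a seeded virtual coin instead:

```python
import random

def choose_apple(left_score, right_score, strategy="rightward", rng=None):
    """Pick a direction; ties are broken by a fully specified rule,
    with no appeal to intuition or free will."""
    if left_score > right_score:
        return "left"
    if right_score > left_score:
        return "right"
    if strategy == "rightward":      # deterministic: equal means go right
        return "right"
    rng = rng or random.Random()     # randomized: flip a virtual coin
    return rng.choice(["left", "right"])

print(choose_apple(1.0, 1.0))  # 'right' -- the donkey never starves
print(choose_apple(1.0, 1.0, strategy="coin", rng=random.Random(0)))
```

Either way, the decision procedure always terminates with a choice, which is all it takes to dissolve the supposed paradox.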

Famous computer scientist Leslie Lamport has an interesting take on this paradox, but I think he makes a similar mistake to the medieval philosophers, artificially restricting the possible algorithms allowed in our donkey’s programming.   For this model, assume the apples and donkey are on a number line, with one apple at position 0 and one at position 1, and the donkey in an arbitrary starting position s.   Let’s define a function F that describes the donkey’s position an hour from now, in terms of s.  F(0) is 0, since if he starts right at apple 0, there’s no reason to move.   Similarly, F(1) is 1.  Now, Lamport adds a premise:  the function the donkey uses to decide his final location must be continuous, corresponding to how he thinks naturally evolved algorithms should operate.   It’s well understood that if you have a continuous function where F(0) is 0 and F(1) is 1, then for any value v between them, there must be a point x where F(x) is v.   So, in other words, there must be starting points x where F(x) is neither 0 nor 1, indicating a way for the donkey to still be stuck between the two apples an hour from now.      Since the choice of one hour was arbitrary, a similar argument works for any amount of time, and we are guaranteed to be infinitely stuck from certain starting points.   It’s an interesting take, and perhaps I’m not doing Lamport justice, but it seems to me that this is just a consequence of the unfair restriction that the function must be continuous.   I would expect precisely the opposite:   the function should have a discontinuous jump from 0 to 1 at the midpoint, with the value there determined by one of the donkey-programming methods I discussed before.
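Concretely, the kind of discontinuous decision function I have in mind looks like this sketch: it satisfies F(0) = 0 and F(1) = 1, yet never leaves the donkey stranded anywhere in between.

```python
def donkey_position(s):
    """Final position after an hour, for a starting point s in [0, 1].
    Deliberately discontinuous at the midpoint: the exact tie at
    s == 0.5 is broken toward apple 1 ('always choose right')."""
    return 0.0 if s < 0.5 else 1.0

print(donkey_position(0.0), donkey_position(0.49),
      donkey_position(0.5), donkey_position(1.0))
```

Every starting point maps to one apple or the other; the intermediate-value argument never gets off the ground because continuity was assumed, not required.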

I did find one article online that described a scenario where this paradox might provide some food for thought though.   Think about a medical doctor, who is attempting to diagnose a patient based on a huge list of weighted factors, and is at a point where two different diseases are equally likely by all possible measurements.   Maybe the patient has virus 1, and maybe he has virus 2— but the medicines that would cure each one are fatal to those without that infection.   How can he make a decision on how to treat the patient?   I don’t think a patient would be too happy with either of the methods we suggested for the robot donkey:  arbitrarily biasing towards one decision, or flipping a coin.     On the other hand, we don’t know what goes on behind the closed doors after doctors leave the examining room to confer.   Based on TV, we might think they are always carrying on office romances, confronting racism, and consulting autistic colleagues, but maybe they are using some of our suggested algorithms as well.     In any case, if we assume the patient is guaranteed to die if untreated, is there really a better option?  In practice, doctors resolve such dilemmas by continually developing more and better tests, so the chance of truly being stuck becomes negligible.   But I’m glad I’m not in that line of work. 



And this has been your math mutation for today.

References:

Monday, January 15, 2018

237: A Skewed Perspective

Audio Link

If you’re a listener of this podcast, you’re probably aware of Einstein’s Theory of Relativity, and its strange consequences for objects traveling close to the speed of light.   In particular, such an object will appear to have its length shortened in the direction of motion, as measured by an observer it is passing; in the object’s own rest frame, its length is unchanged.    It’s not a huge factor— where v is the object’s velocity and c is the speed of light, the length is multiplied by the square root of 1 minus v squared over c squared.    At ordinary speeds we observe while traveling on Earth, the effect is so close to zero as to be invisible.    But for objects near the speed of light, it can get significant.    
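The factor is easy to evaluate; this quick sketch plugs in an everyday highway speed and the 0.99c of the example below:

```python
import math

def contraction_factor(v_over_c):
    """Observed length divided by rest length, for speed v as a fraction of c."""
    return math.sqrt(1 - v_over_c**2)

# A car at highway speed (~30 m/s) versus one at 99% of light speed:
print(contraction_factor(30 / 299_792_458))  # indistinguishable from 1.0
print(contraction_factor(0.99))              # ~0.141, about a seventh of rest length
```

At 0.99c the object measures only about 14% of its rest length along the direction of motion, which is why the effect matters in the thought experiment that follows.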

A question we might ask is:  if some object traveling close to the speed of light passed you by, what would it look like?    To make this more concrete, let’s assume you’re standing at the side of the Autobahn with a souped-up camera that can take an instantaneous photo, and a Nissan Cube rushing down the road at .99c, 99% of the speed of light, is approaching from your left.   You take a photo as it passes by.   What would you see in the photo?   Due to length contraction, you might predict a side view of a somewhat shortened Cube.   But surprisingly, that expectation is wrong— what you would actually see is weirder than you think.   The length would be shorter, but the Cube would also appear to have rotated, as if it has started to turn left.

This is actually an optical illusion:   the Cube is still facing forward and traveling in its original direction.   The reason for this skewed appearance is a phenomenon known as Terrell Rotation.    To understand this, we need to think carefully about the path a beam of light would take from each part of the Cube to the observer.   For example, let’s look at the left rear tail light.    At ordinary non-relativistic speeds, we wouldn’t be able to see this until the car had passed us, since the light would be physically blocked by the car— at such speeds, we can think of the speed of light as effectively infinite, and thus we would capture our usual side view in our photo.   But when the speed gets close to that of light, the time it takes for the light from each part to travel to the observer is significant compared to the speed of the car.  This means that when the car is a bit to your left, the contracted car will have moved just enough out of the way to actually let the light from the left rear tail light reach you.   This will arrive at the same time as light more recently emitted from the right rear tail light, and light from other parts of the back of the car that are in between.   In other words, due to the light coming from different parts of the car having started traveling at different times, you will be able to see an angled view of the entire rear of the car when you take your photo, and the car will appear to have rotated overall.   This is the Terrell Rotation.
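The numbers behind this illusion work out rather neatly; this sketch is my own back-of-envelope version of the standard derivation (assuming a distant observer, so light rays are nearly parallel), showing that the visible strip of the rear face and the contracted side match the projections of an unrotated, uncontracted cube turned by arcsin(v/c):

```python
import math

beta = 0.99  # v/c for the speeding Cube
rear = 1.0   # rest width of the rear face, in arbitrary units
side = 1.0   # rest length of the side

# Light from the far rear corner left (rear / c) earlier, and the car
# moved beta * rear in that time, so a strip of the rear that wide is visible:
visible_rear = beta * rear

# Meanwhile the side is Lorentz-contracted:
visible_side = side * math.sqrt(1 - beta**2)

# A rigid cube rotated by arcsin(beta) would project exactly these two
# widths -- hence the apparent rotation:
angle = math.degrees(math.asin(beta))
print(round(visible_rear, 3), round(visible_side, 3), round(angle, 1))
```

At 0.99c the projections correspond to an apparent rotation of roughly 82 degrees, which matches the dramatic skew described above.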

I won’t go into the actual equations in this podcast, since they can be a bit hard to follow verbally, but there is a nice derivation & some illustrations linked in the show notes.   But I think the most fun fact about the Terrell Rotation is that physicists totally missed the concept for decades.   For half a century after Einstein published his theory, papers and texts claimed that if you photographed a cube passing by at relativistic speeds, you would simply see a contracted cube.    Nobody had bothered carefully thinking it through, and each author just repeated the examples they were used to.    This included some of the most brilliant physicists in our planet’s history!   There were some lesser-known physicists such as Anton Lampa who had figured it out, but they did not widely publicize their results.   It was not until 1959 that physicists James Terrell and Roger Penrose independently made the detailed calculation, and published widely-read papers on this rotation effect.    This is one of many examples showing the dangers of blindly repeating results from authoritative figures, rather than carefully thinking them through yourself.


And this has been your math mutation for today.


References: