Sunday, October 13, 2013

187: Escher Made Real

Audio Link

If you're a fan of this podcast, I'm betting that at some point in your life you had a picture by M.C. Escher up on your wall, probably before you got married. As you may recall, M. C. Escher was the 20th-century Dutch artist famous for his many woodcuts and lithographs that used geometrical tricks to depict objects that could not possibly exist in real life. While Escher didn't have any formal mathematical training, it is said that he was inspired by a trip to the Alhambra in 1936, and its unique use of many of the 17 possible "wallpaper group" symmetries in its designs. After this, he studied an academic paper by mathematician George Pólya on plane symmetries, and started drawing different types of geometric grids to use as the basis of his art. He later studied the works of other mathematicians, basing lithographs on mathematical concepts like representations of infinity and hyperbolic plane tilings. As a result, Escher's art has been continually popular among scientists, engineers, and mathematicians.

When you look at one of Escher's impossible illustrations, it's always amusing to think about what it would take for it to exist in real life. But an Israeli professor, Gershon Elber, has gone one step further, and actually created physical models and 3-D printable CAD files that allow physical creation of impossible Escher objects. How can you create an impossible object, you might ask? The key is the power of optical illusion. Each of these objects only looks correct-- in other words, matching the original Escher picture-- from certain viewing angles; if you rotate it or look from the wrong direction, you'll see that it is seriously distorted. At this point, you might say that Elber cheated, but give him a break-- we're talking about truly impossible objects here. If you learn how to fully warp spacetime at some point and make them non-impossible, then you can freely scoff at him.

One simple example is what's known as a 'Necker cube'. This simple illusion is based on the pseudo-3-D line drawing of a cube that we all learn in elementary school: basically two squares connected by diagonals on the corners. The impossible part comes when you give the lines some thickness, and draw the cube so that when edges in the drawing meet, they cross each other in contradictory ways, forcing the 'far' corner to be closer than the 'near' corner in some instances. If you haven't seen this before, you can find it at the links in the show notes. In Elber's version, it does indeed look like this miraculous cube has been created in real life-- if viewed from exactly the right angle. If you rotate it a little, you'll see that the whole thing is a mess of curved and distorted pieces, nothing like a real cube.

But Elber doesn't only tackle simple geometrical shapes-- he also recreates complete Escher artworks, such as the "Belvedere", a famous depiction of an impossible two-story tower. The two floors appear to be right on top of each other, a fact required by the placement of several supporting pillars. Yet when you look more closely, the picture also requires the two floors to exist at perpendicular angles; if one is going north-south, the other must be going east-west. Since the floors can't be simultaneously right on top of each other and at perpendicular angles, it's a truly impossible object. Again, you can see the picture in the show notes if you don't recognize it from my description. And once again, what looks like a perfect 3-D construction from a certain viewing angle is a distorted mess of angled and curved support beams from any other angle. For it to look like Escher's real Belvedere, you have to be at the exact spot where the angled beams will look straight to you.

By the way, the urge for a real-life Belvedere is apparently not unique to Elber: I also found the web page of another self-described online "professional nerd", Andrew Lipson, who constructed a similar model entirely out of Legos! Lipson's version is actually more complete, even including the minor details and people in Escher's picture. He acknowledges that he needed to cheat at certain points-- for example, quote, "In Escher's original, he's holding an 'impossible cube', but in our version he's holding an impossible LEGO square. Well, OK, not quite impossible if you've got a decent pair of pliers (ouch)."

On Elber's and Lipson's web pages you can find a number of other Escher re-creations. Now while it's pretty fun to say you have created an impossible object in real life, one might argue: if it only looks like the truly impossible object from a certain viewing angle, aren't you cheating? Isn't this roughly equivalent to just creating the 2-D painting anyway? To someone who asks that question, I can only answer that they probably lack the geekiness level required to appreciate the coolness of Elber's and Lipson's work. I suspect most fans of this podcast don't have that problem.

And this has been your math mutation for today.


References:

Monday, May 27, 2013

181: Rediscovering The Basics

Audio Link

In honor of my recent election to the Hillsboro, Oregon School Board, I thought it might be nice to discuss an education-related topic that's been popping up a lot recently. Back in the '90s there was a fad for what became known as "discovery-based" K-12 math curricula. This means that rather than teachers directly telling the students how to do standard operations, such as adding n-digit numbers or simplifying equations, the students would be directed in small groups to experiment and discover such ideas themselves, with some guidance from the teacher. These curricula were widely criticized and debunked-- see the "Mathematically Correct" website linked in the show notes-- and largely fell out of favor. But in recent years, connected with the Common Core movement in the U.S., new forms of these types of programs seem to be coming back. It seems like there is an inherent conflict between two approaches to math education: should students be helped to discover the basic principles of math, or should they be presented with and drilled in standard algorithms? Two basic criticisms of the discovery techniques carry a lot of weight: doubts about the readiness of children to derive standard algorithms, and concerns about whether students will internalize the basics of math.

As a motivating example, I have a link in the show notes to a video of a young girl adding some 3- and 4-digit numbers, using the methods she learned in a "modern" school. She tries two methods. First she draws pictures representing the thousands, hundreds, tens, and ones in each of the numbers, and after a tedious 8 minutes of counting pictures, gets a wrong answer. Then she uses the standard method (which her mom taught her) of writing the numbers on top of each other, adding in columns, and carrying when needed-- and in less than a minute has the correct answer. The whole process of drawing the pictures seems rather absurd, and it's clear that the girl really has no understanding of how the pictures relate to place-based notation, or of how the standard algorithm is really just an advanced abstraction of her picture method. Several online commentators pointed out that the picture method was directly analogous to Roman numerals, which no sensible person would use today for nontrivial calculations.
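To make the contrast concrete, here's a minimal Python sketch (my own illustration, not from the video or from any particular curriculum) of that standard column-addition algorithm-- it just walks right to left, adding one column of digits at a time and carrying a 1 whenever a column reaches 10:

```python
def column_add(a, b):
    """Grade-school column addition: add digit columns right to left,
    carrying a 1 whenever a column's sum reaches 10."""
    digits_a = [int(d) for d in str(a)][::-1]  # ones digit first
    digits_b = [int(d) for d in str(b)][::-1]
    result, carry = [], 0
    for i in range(max(len(digits_a), len(digits_b))):
        col = carry
        if i < len(digits_a):
            col += digits_a[i]
        if i < len(digits_b):
            col += digits_b[i]
        result.append(col % 10)   # the digit we write down
        carry = col // 10         # the 1 we carry to the next column
    if carry:
        result.append(carry)
    return int(''.join(str(d) for d in reversed(result)))

print(column_add(3578, 847))  # 4425
```

Each carry is really just trading ten pictures in one column for a single picture in the next-- the standard algorithm is the picture method with the bookkeeping compressed.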

The first question to ask is: are children in these classes ready to derive the standard algorithms that took mathematicians thousands of years to create? Going from simple direct methods, like the little girl's pictures which were analogous to Roman numerals, to the modern algorithms requires some clever abstract reasoning. As I discovered when I tried to clarify the higher-dimensional concepts from the Flatland movie to my 6-year-old, there are some types of logic and abstraction that a child may not be ready for. Many psychologists talk about Piaget's stages of cognitive development: the "symbolic function" stage at ages 2-4; the "intuitive thought" stage at ages 4-7; the "concrete operational" stage at ages 7-11; and the "formal operational" stage beyond that. You can see more details at the link in the show notes. The exact ages vary from child to child of course, but the key point here is that at the first three stages, a child's ability to develop abstract algorithms from concrete examples is severely limited, and the capacity for true abstract reasoning isn't really developed until high school for a majority of kids. This is probably one reason why the girl in the video had trouble seeing the connection between the picture-based method and the standard method of addition. But at these younger ages, kids are much better at picking up and internalizing rote facts and procedures, which would seem to make it an ideal time to teach them standard algorithms.

We also need to ask whether these new types of lessons, assuming they focus on both the historical derivation and the current best-known methods, will actually result in students learning the standard algorithms. Here there is reason for concern. Often the modern math lessons have many fewer practice problems in the homework than traditional math, and encourage the use of calculators and computers to do basic calculations. Unfortunately, these are missing a very important aspect of all this drilling-- it helps people to really absorb basic mathematical algorithms, and make them instinctive. Back in episode 70, "Number Nonsense", I discussed my frustration with a fast food cashier who could not recognize that 2+2=4 without a calculator. And during my recent school board campaign, I came across a mom who was upset that her child took out a calculator when asked the difference between 6 percent and 600 percent. More important than these anecdotes, though, is that in order to have any hope of success in advanced science and engineering classes, kids really need this basic number sense. Even if the hard problems are ultimately crunched by computer programs, it's simply not possible to have an initial discussion of an engineering problem if every rough estimate needs a pause to get out a calculator. The little girl in the video was lucky that her mom took matters into her own hands, working with her at home to make sure she had a good understanding of the standard algorithm.

Our standard methods, like the one that enabled a young girl to add 4-digit numbers in less than a minute, were developed after thousands of years of thought by very smart people. And since then they have been used regularly by a huge population of businessmen, scientists, and engineers, the majority of whom have never given a thought to the details of how these were derived or why they work. A lot of proponents of the new teaching methods criticize the "drill and kill" of traditional math, the many exercises given to practice and memorize standard algorithms, which they consider boring. But the advantage of all this drilling is that using these techniques becomes easy and instinctive: how often do we stop to ask ourselves why we are able to do simple addition problems? Well, maybe Math Mutation listeners do, but most people can add just fine without thinking about the historical development of place-based notation. Being able to do basic math opens the doors to advanced concepts of science, engineering, and computers. And students who have NOT mastered these basic foundations will find these important topics forever beyond their reach.

So, am I advocating that we throw out all these modern and Common Core math programs, and go back strictly to traditional methods? Not necessarily. A properly designed program which teaches and drills the standard algorithms, while also emphasizing problem-solving and using the history of their derivation to provide some motivation and color to the class, might very well be a great success. But in the urge to eliminate the drill-and-kill method and make math more fun, we seem to be constantly rediscovering the lesson that Euclid taught to Ptolemy over 2000 years ago, that there is no "royal road" to mathematics. We need to be very careful of the tendency of education efforts every few decades to come up with silver bullets that will somehow make math easy for everyone-- internalizing the basic algorithms of math usually requires a lot of concentrated thought and hard work. We've been here before, as you might recall: back in episode 145 I discussed the New Math movement of the 1960s, similarly based on professors promising that adding theoretical foundations to elementary mathematics would somehow make it easier to learn, and ending in disaster. Many aspects of math really can be fun, as I hope most of my podcast episodes have shown, but to understand the subject overall you really need a solid mastery of the basics.

By the way-- I know from listener emails that many of you out there are actually math or science teachers. I'd really like to hear from you about experiences, positive or negative, with these new mathematics programs. Please send emails to erik@mathmutation.com. Thanks!

And this has been your math mutation for today.


References:

http://hillsboroerik.com : My school board / education blog
http://en.wikipedia.org/wiki/Mathematically_Correct : Mathematically Correct
http://www.youtube.com/watch?v=1YLlX61o8fg : Video of young girl doing addition
http://en.wikipedia.org/wiki/Piaget%27s_theory_of_cognitive_development : Piaget's theory of cognitive development
http://wheresthemath.com/curriculum-reviews/math-textbook-reviews/testimonies-about-programs/ : Article on problems with discovery math
http://www.cpm.org/ : Discovery-type program being adopted in my district

Monday, April 29, 2013

180: Reading The Tea Leaves For Real

Audio Link

Before we begin, I'd like to thank listener Bret McDermitt for posting another nice review on iTunes. I'd also like to thank the Math Insider website for featuring Math Mutation in an article on great math podcasts-- check the article out at the link in the show notes! Now, let's get on to today's topic.

As I looked out the window recently on a windy day, watching leaves and dirt fly around outside, it brought to mind Rudy Rucker's concept of a 'paracomputer', a computer based on observing natural processes that happen to be equivalent to the computation you want to perform. Let's review the background of this idea briefly so we can explain it.

You may recall from some earlier podcasts the ideas of Stephen Wolfram's controversial tome "A New Kind of Science", which claims that we can view many natural phenomena as similar to a computation such as a cellular automaton. One popular example of a cellular automaton is Conway's Game of Life. In this game, each square in a grid can be "alive" or "dead", usually represented by being colored black or white. A live square stays alive for the next time step if and only if it has exactly two or three live neighbors, otherwise dying of loneliness or overcrowding. And a dead square is "born" and becomes alive if it has exactly three live neighbors. The amazing thing about this simple game is that depending on the initial set of live squares, it can display a huge range of lifelike behaviors, as you can see illustrated on the Wikipedia page linked in the show notes. This led researchers such as Wolfram to hypothesize that such simple computations might explain the basics of how our universe really works.
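If you'd like to experiment with these behaviors yourself, the rules above fit in just a few lines of code. Here's a minimal Python sketch of my own (the function and pattern names are purely illustrative, not from any standard library):

```python
from collections import Counter

def life_step(live):
    """Advance Conway's Game of Life one generation.
    live is a set of (x, y) coordinates of live cells."""
    # Count how many live neighbors every relevant cell has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Live with 2 or 3 neighbors survives; dead with exactly 3 is born.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A 'glider', one of the classic lifelike patterns: every 4 generations
# it reappears shifted one cell diagonally, crawling across the grid.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(sorted(glider))  # the starting shape, shifted by (1, 1)
```

Representing the board as a set of live coordinates keeps the grid unbounded, matching the idealized infinite plane the game is defined on.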

If you look at the patterns created by various Life configurations, you can see three basic types. Predictable patterns settle into an endless repetition of simple elements, such as a square group of cells simply living forever in the middle. Chaotic patterns end up looking totally random, like the static on a malfunctioning TV set. The most interesting ones are what Rudy Rucker calls "gnarly" patterns, which seem to have interesting structure but are unpredictable. These gnarly patterns are what seem to have promise to represent real life: for example, a gnarly pattern might look analogous to clouds moving in the sky, the life cycle of a jellyfish, or celestial bodies interacting over the lifetime of a solar system. These all share the property that their ultimate results are in theory predictable, but so complex to calculate that there's no fundamentally better way to figure out what will happen than to wait and see.

After observing many of these gnarly systems, Wolfram hypothesized the Principle of Computational Equivalence. This states that at some fundamental level, all these gnarly real-life systems are each capable of acting as a universal computer, and thus equivalent to each other. So, for example, by coming up with a suitable "mapping" of clouds that represents the initial state of a jellyfish being born, and somehow manipulating the clouds into this configuration, you would be able to, by watching those clouds, predict exactly what is happening at any stage of the jellyfish's life. You just need to decode the current state of the clouds into the equivalent state of the jellyfish.

And this is Rudy Rucker's idea of a "paracomputer". The concept is that you figure out some gnarly physical phenomenon that you can control, identify the proper mapping to and from whatever other type of system you want to analyze, and just set it in motion. Sounds pretty cool, right? So, all those old fortune-tellers who claim to be able to tell the future from tea leaves might actually have the core of a feasible idea: all you have to do is figure out enough detail about your life to map its precise state into the tea leaves in a very large kettle, shake the kettle in a defined way so the leaves will represent each future state of your life, read the results in the leaves, and map the final leaf pattern to the details of your life. Presto! You now know every detail of your future.

Now if you're picky, you might have spotted a few flaws in this idea. Wolfram's Principle itself is very controversial, as it seems to collapse what could potentially be many different complexity classes, and claim that a huge range of phenomena are equally complex to compute. His observations provide some evidence, but nothing close to a real proof. There's also the question of how you would fully figure out the starting state of your life, to map to the tea leaves: you might object as the fortune-teller takes out a bone saw, opens up your skull, and carefully extracts the state of each neuron in your brain. Even after your brain has been carefully put back together without modifying the state, then there's the question of how many tea leaves it would take to map your life: the fortune-teller might need all the tea in China to properly represent it, especially if you live the exciting life of a math podcast fan. And finally, there's the question of how long it would take the tea leaves to reach their representation of any point in your future: would this really be any faster than just waiting for your future to happen?

But, nevertheless, it is kind of fun to think that nature really consists of computers all around us, just waiting to be properly understood. Maybe soon you'll be able to turn in your PlayStation 3 and play Call of Duty with leaves blowing in the wind in your backyard.

And this has been your Math Mutation for today.

References:
  • http://www.mathsinsider.com/math-podcasts/ : Math Insider article
  • http://www.asimovs.com/_issue_0511/Thougthexperiments.shtml : Rudy Rucker article describing paracomputers
  • http://en.wikipedia.org/wiki/A_New_Kind_of_Science : Wolfram's "New Kind of Science" at Wikipedia
  • http://en.wikipedia.org/wiki/Conway%27s_Game_of_Life : Conway's Game of Life at Wikipedia

Sunday, January 6, 2013

176: Perfect Maps

Audio Link

Before we start I'd like to thank listener SkepticHunter for posting another nice review on iTunes. Don't forget to post one too if you also like the podcast!

Recently I installed Google Earth on my iPhone, and showed my daughter the cool feature where you can start with a view of the whole planet, and then zoom in to the exact spot where you currently are. Playing with this feature brought to mind an amusing essay I had recently read in Umberto Eco's "How To Travel With A Salmon", called "On the Impossibility of Drawing a Map of the Empire on a Scale of 1 to 1". The essay discusses a project of creating a map that is so detailed, it actually has a 1:1 scale, with each element on the map being the same size as the feature it describes.

The silly idea of a 1:1 scale map was apparently first proposed by Lewis Carroll in his lesser-known book Sylvie and Bruno, a somewhat enjoyable tale that didn't quite have the narrative flow of Alice in Wonderland, but shared its sense of absurdity. At one point, a character brags about his kingdom's mapmaking skills:


"What do you consider the largest map that would be really useful?"
"About six inches to the mile."
"Only six inches!" exclaimed Mein Herr. "We very soon got to six yards to the mile. Then we tried a hundred yards to the mile. And then came the grandest idea of all! We actually made a map of the country, on the scale of a mile to the mile!"
"Have you used it much?" I enquired.
"It has never been spread out, yet," said Mein Herr: "the farmers objected: they said it would cover the whole country, and shut out the sunlight! So we now use the country itself, as its own map, and I assure you it does nearly as well."
I think that quote pretty well speaks for itself.

Argentine writer Jorge Luis Borges was apparently a fan of Carroll's, and a few decades later in a short-short story commented on the idea himself. In Borges's vision, the 1:1 scale map is actually constructed, but then abandoned as useless, and left in western deserts to be gradually eroded away by the weather. In another of his essays, Borges points out that if England contains a perfect map of England, then that map must contain a depiction of the map itself, since that map is part of England-- and that depiction must include the depiction of the map, etc, out to infinity.
 
Eco's essay goes into much more detail than Carroll or Borges, discussing the major requirements for a 1:1 map to be useful. He insists that the map be able to exist within the country being mapped; I guess this is because it's not useful as a travel reference if the map must be elsewhere. It must be an actual map, rather than a mechanically created replica: for example, it would be cheating to take a plaster cast of the whole country and call it a map. The map also needs to be useful as a tool for referencing other parts of the country: so it can't be the case that for any location X, the depiction of X on the map lies on the portion of the map directly located at X, since then the map would be no more useful than a transparent sheet over the land. He also discusses various physical constraints such as the materials needed, folding techniques, and the effect on the actual country of having a giant map rolled out over it.
 
I also found an online discussion that pointed to several lesser-known references to such 1:1 scale maps. The rock band They Might Be Giants sings about a ship that is a 1:1 scale map of the state of Arkansas. The British comedy series "Blackadder" has an episode where an incompetent general is constructing a tabletop replica of recently conquered territory on a 1:1 scale. And modern comedian Steven Wright has a joke that goes "I have a map of the United States...actual size. It says, 'Scale: 1 mile = 1 mile.' I spent last summer folding it."
 
This whole discussion also seems to relate to the famous philosophical statement by Alfred Korzybski, "The map is not the territory". He was pointing out that if you are referring to a map, alternate view, or even a mental abstraction of something, you shouldn't confuse that with the thing itself. Naturally, this doesn't only refer to literal maps, but to just about anything you might see, interact with, or think about. This leads to another infinite regression problem, as we cannot actually sense physical things directly: we are always reacting to images on our retina, models in our mind, sets of beliefs we have about reality, or similar abstractions. So even when we think we are directly interacting with actual reality, we are actually dealing with somewhat less fidelity than the ideal 1:1 map.
 
But, in any case, what I consider the most ironic thing about all these riffs on Lewis Carroll's original attempt at absurdity is that the whole concept of 1:1 scale maps is no longer so absurd. Modern software tools like Google Earth really do allow us to depict arbitrary scales, even 1:1, by storing the map virtually and letting us just zoom in on the parts we currently need. Technically I don't think we have quite the satellite resolution to consider Google Earth maps to be 1:1 scale, but we're getting pretty close. And I'm sure philosophers will continue to point out that the Google Map is not the territory-- but for practical purposes, it seems close enough to me. I wonder what other ridiculous absurdities from Alice in Wonderland or Sylvie and Bruno will be rendered mundane by future technologies.
 
And this has been your math mutation for today.
References:
 



Sunday, November 25, 2012

174: Too Much Math?

Audio Link

Before we start, I'd like to thank listeners JMS and Daniel54600, who posted nice reviews recently on iTunes. Remember, if you like the podcast, seeing good reviews does help motivate me to record the next episode!

Anyway, on to today's topic. Recently I read online about a research study that was published this past summer in the Proceedings of the US National Academy of Sciences, called "Heavy Use of Equations Impedes Communication Among Biologists", by Tim Fawcett and Andrew Higginson. The authors analyzed a large number of recent biology papers, and tried to relate the number of equations in each paper to the citation count, or number of later papers that referenced it. They concluded that the more equations you have, the fewer people will later read and use your paper: each increase of 1 equation per page corresponded to a 28% penalty in citations. So, does this really mean that scientists are afraid of math?

As you would expect, the Fawcett-Higginson paper led to quite a bit of discussion in the blogosphere. Do scientists really hate equations? One caveat that was often pointed out is the fact that theoretical papers with lots of equations tend to be more specialized, which inherently leads to lower citation rates. You can probably think of many other factors in the audiences and targets of papers which might have led to the reported results. On the other hand, there are also numerous articles supporting the implication that biologists don't like math in their papers, one pointing out that some Ph.D.s in the life sciences never had to take a math class beyond calculus. Going through the equations can be difficult-- one blogger linked in the show notes suggests that reading papers should be accompanied by "Eye tracking and mild electroshock therapy. If scientists skim over pages of equations or stare into space for too long while reading a technical paper, they get a gentle jolt of electricity to bring them back to the important equations at hand." A bit of hyperbole, but a good way to highlight the amount of mental discipline it takes to follow a detailed series of equations in a paper.

Reading about this paper brought to mind memories of my graduate studies in computer science, back in the early 90s. One day I had come to my advisor with a proposal for my dissertation topic, involving efficient checkpointing methods for parallel programming systems. After looking over my proposal, he looked up at me and said, "It's a good topic, but can you work in more equations?" I was a little confused, and asked where he thought I had skipped a needed equation. "Nowhere specific, it's just that more equations are expected." Needless to say, that was one of the more frustrating conversations of my aborted academic career.

I think what most discussions of the paper are missing is to ask the question: what is the role of an equation in a paper? Is it a quasi-mystical invocation, as my advisor seemed to think, adding credibility to the thesis regardless of its content? Is it an ego-boosting method for the author, allowing him to claim a deep understanding that is inaccessible to the casual reader? Obviously, neither of these is a very good answer. My answer would be that in a science or engineering paper equations perform a very important role: they allow you to rigorously prove that some result is a consequence of your assumptions, definitions, and experimental work. Mathematics is what you use to start with precise definitions and assumptions, and show their logical consequences. When you can derive a new equation to describe a concept in a rigorous, universally applicable way, that is one of the most powerful results possible in a piece of scientific work.

But do scientists and engineers usually work by spending their days deriving series of equations? Actually, before there is an equation, there is usually some kind of intuition about how the world works. And it is refining and clarifying these intuitions and experiments in a precise way that leads to the mathematical results. Often when writing a paper, proud of the difficult and rigorous work it took to derive the equations based on your original theories, it is easy to get caught in the trap of wanting to put those equations up front as the primary focus of your communication. It's not the fault of the authors that they make this mistake, as it has been programmed into us starting as early as high school geometry. Were you in a class where you were presented a series of proofs that seemingly sprang in ordered, fully understood form from Euclid's mind? I would bet that each of his results came from many hours of doodling and experimenting with different pictures and measurements. How many triangles do you think the ancient Greeks drew before they were ready to prove the Pythagorean Theorem?

When you're writing a paper in any scientific or engineering discipline, you need to keep in mind that your primary job is to communicate with your audience-- to enable the reader to understand the ideas in your paper. If you have performed some insightful math to verify the surprising and more general consequences of your initial intuitions, it is important to tell your readers about that, BUT also important to help them understand the original intuitions that led you to the equations. The first reading of a paper by an individual reader will nearly always be a casual attempt to get the basic idea; only after they have been convinced intuitively of the value of the new contribution will they take the time and mental energy to follow all the details. I've sat through way too many lectures at conferences that presented a dense series of derivations and equations one by one, rather than describing at a high level what they are talking about. The results were probably really good, but without any intuition to grab on to, it's nearly impossible to stare at a long series of equations on the screen or in a paper without zoning out.

So I wouldn't take the Fawcett-Higginson result as a statement against equations-- but as a call for theoreticians to keep their audience in mind when trying to describe their results. If describing the intuition first and then putting the detailed equations in an appendix makes the paper more understandable, they should not be afraid to do it. It doesn't detract from the math to provide the audience with an intuition about what inspired it, and often can make it much more likely for the work to ultimately be understood and built upon. And isn't that the real point of a scientific or engineering paper?

And this has been your math mutation for today.

References:

 

Sunday, October 28, 2012

173: Digital Roots

Audio Link

Back in episode 128, you may have heard me rightly make fun of New Age numerologists for their propensity to read great significance into adding up the digits of a number, ignoring the various deep and interesting aspects of numerical structure. I still reserve the right to make fun of them for this, as it's generally pretty silly to prognosticate the future that way. But, being a thoughtful Number 7 personality, it occurred to me that it might be worth talking about the legitimate uses for summing the digits of a number. While it doesn't have quite the universal significance taught by the numerologists, adding the digits of a number is a legitimate mathematical operation, known as the “Digital Root”, and does have real applications.

To start with, let's take a closer look at what you get when you add a number's digits, repeating the process if the result is multi-digit, until you have digested the original number down to a 1-digit value. At first you might think the result is totally arbitrary, but as with many things in math, if we look a little closer we can find a surprising pattern. Try adding the digits of a few examples, and you may start to notice that the digital root is congruent modulo 9 to the original number. In other words, if you take the number you started with, divide it by 9, and look at the remainder, you'll get the same value as you did by this digit-summing process. (Note that for this purpose, as is standard in modular arithmetic and bargain sales prices, we are treating 9 as equivalent to 0.) For example, look at 56. Since 5+6=11, and 1+1=2, its digital root is 2. And it's also 2 more than the closest multiple of 9, 54. How did that happen?
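If you'd like to check this pattern on more examples than your patience allows by hand, here's a minimal Python sketch (my own illustration) of the digit-summing process, compared against the mod-9 remainder:

```python
def digital_root(n):
    """Sum the digits of n, repeating until a single digit remains."""
    while n > 9:
        n = sum(int(d) for d in str(n))
    return n

for n in (56, 1234, 999):
    # (n - 1) % 9 + 1 is 'n mod 9, with 9 standing in for 0'
    print(n, digital_root(n), (n - 1) % 9 + 1)
# prints 56 2 2, then 1234 1 1, then 999 9 9 -- the two columns always agree
```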
 
It's actually pretty easy to prove that this is always the case. Suppose we have a 4-digit number abcd. Remember that in our place-based number system, the value of this number is equivalent to 1000a + 100b + 10c + d. Let's rearrange this a bit, and write it as 999a + 99b + 9c + (a + b + c + d). When you write it this way, you can see that the original number is equal to some multiple of 9 plus the sum of its digits. If the sum of the digits is itself multi-digit, you can repeat the process for another iteration or two. In any case, you can see that our result is proven: this sum really is congruent mod 9 to the starting number.
 
If you're lucky enough to have gone to elementary school before the age of calculator addiction, you may vaguely recall one powerful use for this property of digital roots: the old trick of “casting out 9s”, to check if the result of an arithmetic operation you did by hand is accurate. This takes advantage of the fact that the mod 9 value is preserved by addition, subtraction, and multiplication: so if P is congruent to 3, and Q is congruent to 2, then P+Q is congruent to 5. (A division can be checked the same way, by casting out 9s on the equivalent multiplication.) So if you find the digital root of P and add it to the digital root of Q, the sum should match the digital root of your result for P+Q-- if it doesn't, you made a mistake somewhere. Be careful though-- about 1/9 of your mistakes will probably result in an error that is congruent mod 9, and will be missed by this method. The term “casting out 9s” comes from the fact that you can take a shortcut and cross out any sets of digits that sum to 9 before adding up the digital roots, without changing the results.
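Mechanically, the check looks like this (another quick sketch of my own, reusing the digital_root function from above):

```python
def digital_root(n):  # as in the earlier sketch
    while n > 9:
        n = sum(int(d) for d in str(n))
    return n

def plausible_sum(p, q, claimed):
    """Casting out 9s: does p + q = claimed pass the mod-9 check?"""
    return digital_root(digital_root(p) + digital_root(q)) == digital_root(claimed)

print(plausible_sum(1234, 5678, 6912))  # True:  6912 is correct
print(plausible_sum(1234, 5678, 6812))  # False: off by 100, caught
print(plausible_sum(1234, 5678, 6921))  # True:  off by 9, so the check misses it
```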
 
There are also many cases where certain types of numbers leave “tracks” in their digital roots, which can be used to quickly rule out candidates that might or might not be numbers of a certain type. As the simplest example, numbers divisible by 9 are identifiable by their digital root of 9. Related to this is that numbers divisible by 3 always have digital roots of 3, 6, or 9. All squares have a digital root of 1, 4, 7, or 9, and cubes have a digital root of 1, 8, or 9. On the Wikipedia page you can see the patterns for other cases such as perfect numbers, prime numbers, twin primes, factorials, and more.
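These “tracks” are easy to spot-check empirically; for instance, this little sketch (using the shortcut formula we proved above) confirms the claims for squares and cubes:

```python
digital_root = lambda n: (n - 1) % 9 + 1  # shortcut form, valid for n >= 1

print(sorted({digital_root(n * n) for n in range(1, 10000)}))   # [1, 4, 7, 9]
print(sorted({digital_root(n ** 3) for n in range(1, 10000)}))  # [1, 8, 9]
```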
 
A cooler application of digital roots is the figure known as the Vedic Square, an ancient Indian variant of the traditional 9x9 multiplication table, with the digital root in each square instead of the product. So for example, in row 7 and column 8, instead of displaying the product 56, we would write 2, since 5+6 = 11 and 1+1 = 2. I know, you're probably thinking to yourself, “Since when are multiplication tables cool?” But try drawing one, and then coloring in all the squares with a particular set of numbers-- color in all the 1s, or all the 2s, or all the 4s and 5s, etc. You will find that you generate intricate symmetrical patterns, which you can see at some links in the show notes if you're too lazy to start sketching out squares yourself. Such patterns have been observed since ancient times and have been used as the foundations of various forms of abstract Islamic art. I also found one web page claiming that the Sistine Chapel, the I Ching, and the game of chess were all influenced by Vedic Square patterns, but those claims sound like a bit of a stretch.
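If you'd rather let the computer do the sketching, a few more lines of Python (again just my own illustration) will print a Vedic Square and highlight a chosen set of digits:

```python
digital_root = lambda n: (n - 1) % 9 + 1

vedic = [[digital_root(r * c) for c in range(1, 10)] for r in range(1, 10)]
for row in vedic:
    print(' '.join(str(x) for x in row))

print()
# Highlight all the 4s and 5s to reveal one of the symmetric patterns:
for row in vedic:
    print(' '.join('#' if x in (4, 5) else '.' for x in row))
```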
 
We should also keep in mind that the concept of a digital root is closely tied to the fact that our number system is base 10. If, for example, we decided to use a base 12 number system, then the digital root of any number would be congruent to it modulo 11 rather than modulo 9. The process of casting out 9s would be replaced by casting out 11s. And we would be able to generate an 11x11 Vedic square that would be totally different from our base-10 9x9 one. Next time you're in the mood to doodle some art and feeling uninspired, just choose an arbitrary base and draw a Vedic square of your own, and you'll generate a fresh new set of visually interesting patterns.
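Generalizing the sketch to other bases is straightforward too; a hypothetical version might look like this, with the result always congruent to the number modulo one less than the base:

```python
def digital_root_base(n, base=10):
    """Digital root of n as written in the given base."""
    while n >= base:
        total = 0
        while n:                       # sum the base-'base' digits of n
            n, digit = divmod(n, base)
            total += digit
        n = total
    return n

print(digital_root_base(56))      # 2: the familiar base-10 digital root
print(digital_root_base(56, 12))  # 1: 56 is '48' in base 12; 4+8 = 12 = '10'; 1+0 = 1
print(56 % 11)                    # 1: matches 'casting out 11s'
```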
 
And this has been your math mutation for today.
References: 
 

Sunday, August 5, 2012

169: Isaac Newton, Supercop

Audio Link

Before we start, I'd like to thank listener DeeTeeEnn, who posted another nice review on iTunes. Remember, you can have your name immortalized in podcast form as well, if you follow Dee's lead & post a review of your own. Anyway, on to our main topic.

You may have been amused or disgusted by the recent release of a cinematic bomb known as "Abraham Lincoln, Vampire Hunter". Kind of ridiculous to take an accomplished historical figure and use him as a prop in an adventure totally unrelated to his accomplishments, isn't it? But perhaps the silliest thing about this idea is that there are plenty of real-life unexpected juxtapositions that could have been mined to create much better, and less repulsive, movies. For example, suppose you saw a marquee with the title "Isaac Newton, Supercop." What would you think? Believe it or not, such a movie might very well be a historically accurate documentary. Isaac Newton, the founder of modern physics and co-inventor of calculus, was also a pioneer in many aspects of modern law enforcement, and an amazingly successful detective.

Newton's unexpected detective career began when he was appointed Warden of the Royal Mint in 1696. If you're like me, you may have read about this in the last line of some biographical article on Newton, and assumed this was a symbolic office or some kind of semi-retirement. In the past, many holders of this post had been influential nobles who got there through connections, and didn't care about the job-- but Newton was meticulous in everything he did, and the nation was in a monetary crisis. One estimate was that 20% of the coinage was counterfeit, and low confidence in the value of money caused a resurgence of medieval-style barter. In order to restore trust in the monetary system, a great recoining was planned, where new coins would be issued that had guaranteed value and authenticity. The first thing Newton did was what you might have guessed: he carefully managed the manufacturing processes involved in recoinage, performing some of the first time-motion studies and making the mint more efficient than it had ever been, increasing its output by a factor of 10.

But taking a wider view of the problem, Newton realized that the new coins would not solve the nation's monetary problems on their own: after they were issued, the state needed to defend their integrity. Reading the details of his job description, he discovered that he was the primary official entrusted with catching counterfeiters. At first he tried to get out of this duty, but once he realized he was stuck with it, he dove into it with the same skill he devoted to his other pursuits. Looking around and studying the situation, he figured out a few things. Creating counterfeit money was a crime that always involved multiple people: it required a location to produce the coins, accomplices to acquire the raw materials and put the coins into circulation afterwards, and complicit neighbors who would not question the noise and smoke. But anyone you might catch red-handed with a counterfeit coin in the street would be at best a small player: getting to the source of the problem required human intelligence, direct testimony from people involved in coining conspiracies or close to them. So he recruited a network of undercover agents and informers who would hang out among the seedy locales where these criminals might be found, bringing him information about counterfeiting plots as they were being hatched. In some cases Newton himself even frequented such locations to get a feel for the atmosphere in which such conspiracies occurred. Learning from his informers, he carefully waited until he had enough independent evidence, and only then took the criminals into custody. He carefully interrogated them with methods that would generally not be too far out of place in an episode of Law and Order: after questioning them in detail about their crimes and their accomplices, he would record the results in writing and ask the subject to sign and verify that the recorded information was accurate. He conducted over 200 cross-examinations, and managed to convict 28 counterfeiters.

His most famous case was that of William Chaloner, a sometimes successful counterfeiter who was a little too audacious for his own good. Chaloner had made a profitable career both of creating fake coinage, and of tricking others into counterfeiting plots so he could turn them in to the government and earn a reward. This technique of playing both sides of the fence worked for him for a number of years, his so-called "service to the Crown" saving him on the rare occasions when he was caught committing crimes. When the recoinage was announced in 1696, he began scheming to use it to his advantage. He made some personal connections and managed to make a presentation to Parliament on the various ways of manufacturing false coins, which he knew well from experience, attempting to make the case that his extensive knowledge made him the perfect candidate to supervise coin production, superior to the current Warden of the Mint. If it had worked, Chaloner would have been able to counterfeit coins right at the source, from within the mint-- no doubt leading to a fortune in illicit profits. This was a massive strategic blunder on his part, though, since it brought him to Newton's attention, and Newton then took a closer look at Chaloner's colorful career. Using his skills in human intelligence, he soon connected the various dots of Chaloner's life, confirming his suspicion that the supposed public servant was actually a professional criminal. Soon he was able to neutralize Chaloner's deceptive claims and convict him of his crimes.

While we might argue that these law enforcement activities are pretty trivial in comparison to his mathematical and physical contributions, it would be foolish to underestimate the importance of England's economy and monetary system at this critical time in history. You can learn a lot more about Newton's career at the Mint and his battle with Chaloner in the book "Newton and the Counterfeiter" by Thomas Levenson, which is where I first learned of this story. So next time you take some time from your day to ponder Isaac Newton's accomplishments, which you should really be doing pretty often given all that he did accomplish, don't forget that on top of the physics and the math, you should also think about the law enforcement. And next time you are having lunch with your favorite Hollywood producer, be sure to pitch Isaac Newton as an ideal subject for the next historical action thriller.

And this has been your math mutation for today.


References: