Saturday, November 30, 2019

256: Crazy Eyed Fish

Audio Link

In the last episode I talked a bit about Richard Dawkins’s book “Climbing Mount Improbable”, explaining how arguments about the mathematical impossibility of evolution tend to overlook some details of how it actually works.   Today we’re going to talk about another interesting mathematical aspect of evolution that Dawkins discusses in that book: the rise of symmetry.  We all know that most modern animals exhibit symmetry along at least one axis.  Some have argued that this is evidence of intelligent planning, similar to a computer-based CAD design done with attention to appearance, beauty, or other subjective standards.   But actually, if you think about how evolution works, there are several natural selection pressures that inherently push multicellular creatures towards symmetry as they evolve.

One of the most basic observations is that if you’re a creature that simply floats in the water, is anchored to a point on the ground, or doesn’t move much, you have a natural notion of “up” and “down”, but most other directions are equivalent.   Up is the place where sunlight comes from, while Down is the direction where gravity pulls you.   Aside from this distinction, you don’t want to favor particular other directions, since you don’t know where your food will happen to come from.   So if you evolve some useful new feature, like a starfish’s arms, the most efficient design will equally space them around you, enabling their use in all directions.   A natural form of mutation is to duplicate a body part.   These simple observations result in the radial symmetry we see in creatures like jellyfish and starfish.

In the case of creatures that intentionally move for a while in a consistent direction, new factors come into play.   You still have the natural concepts of up and down, since the sun and gravity are almost always going to be critical factors.   But now whenever you are moving, you also have a definite direction of motion, usually towards your food.   Thus you want to have a mouth at one end, and your waste disposal method at the other, so you leave your waste behind rather than wasting energy re-consuming it.    But the directions perpendicular to your motion, the left and right, are still essentially equivalent, so biasing your body in favor of one side would be disadvantageous, and could even pull you around in circles if it affected your method of movement.

I think the most interesting examples of evolution are the rare exceptions to the above rules, where a formerly symmetric animal finds benefits in breaking that symmetry.   For example, flatfish like flounders and sole look pretty bizarre to us, as they lie on the bottom of the water with both eyes facing upward, one in the “proper” location and the other oddly misplaced on the fish’s face.   They evolved from typical symmetric, nice-looking fish that swam vertically in the water, and in fact are born in a similar form.   But when some of them developed a behavioral mutation and discovered they could feed efficiently by lying on the bottom, selection pressure gradually created mechanisms that move the now-useless bottom eye around to the top as the fish grows.   The resulting arrangement is bizarre-looking due to its asymmetry, but of course quite functional for the fish.   And remembering our discussion in the last episode, about how evolution must proceed by very small, not-too-improbable steps:  a series of mutations that gradually moves the existing eye to an awkward but functional position is much more likely than a lucky, almost impossible mutation that would rearrange the eyes into symmetrical positions on top of the fish.

And of course, we can’t forget the many cases where a useful body feature doesn’t directly affect a creature’s interaction with the outside world, and so can evolve asymmetrically with no problem.   How many medical comedies have you seen on TV with the cliché scene where a surgeon starts to cut on the wrong side while removing someone’s appendix?   We actually have many internal organs in asymmetric positions— in fact, to a designer who valued the beauty of symmetry, human (and most animal) internal organ arrangements are a total mess.    Maybe in a few years we’ll understand DNA well enough to redesign ourselves in a nice, truly symmetric pattern.

And this has been your math mutation for today.


Sunday, October 6, 2019

255: Mountain Climbing With Dawkins

Audio Link

Back in episode 118, we discussed some of the flaws in probabilistic arguments against evolution:  the claims that advanced forms of life are so improbable that the math simply disallows the possibility that they could have evolved.    And we do have to admit, looking at some basic components of the human body, like our rather complex eye, it does seem unlikely that such a feature could just show up by random chance.   Arguments of this kind appear again and again in popular literature, but they demonstrate a basic misunderstanding of some of the key concepts of evolution and of probability.   Recently I’ve been reading another great book by Richard Dawkins, “Climbing Mount Improbable”, that addresses this argument in more detail, so I thought it might be worth revisiting these ideas in this podcast.

Dawkins introduces a central metaphor, Mount Improbable, a huge mountain that represents the pinnacle (as we see it) of human evolution.    Suppose you are approaching the mountain, and in front of you see a sheer cliff, with the peak a mile above.    You might wonder:  how could anyone ever get to the top of this mountain?   It’s likely you would judge the task impossible, and turn around and go home rather than attempting it.   

But suppose the opposite side of the mountain looks completely different.   There is a gentle slope, leading upwards from the seashore, where you travel hundreds of miles horizontally to get to that one mile elevation.   While it would take a lot of time, gradually walking up such a slope is not impossible at all:  at any given moment, you are comfortably rising a barely perceptible amount, and getting closer and closer to the peak.   Eventually you will reach it, having taken a lot of time but not encountered any other difficulty.    From the top of the cliff, you could then stare down at the foolish tourists below as they gape at your seemingly impossible mountain-climbing feat.

Essentially, evolution is like this long journey up the mountain.   Evolution’s critics are right that a mutation that suddenly creates a major new feature in an animal is nearly impossible— but biologists don’t claim that that’s what happens anyway.   The classic complaint is that a sudden mutation creating a humanlike eye in an eyeless creature would be equivalent to a tornado blowing through a junkyard and suddenly creating a Boeing 747 airplane.    However, the role of random mutation in the evolutionary process is that of creating tiny incremental improvements:  whenever a small genetic change makes a creature more likely to reproduce, that change will gradually become more and more prevalent in its species’ gene pool.   The gradual, incremental effect of all these changes, each one of which is a tiny improvement on the previous creature, is what causes evolution.   The sum total of such gradual improvements leads eventually to what appear to be extremely complex features.
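To see how huge the difference in probabilities really is, it’s fun to play with a little simulation.   The Python sketch below is inspired by Dawkins’s famous “weasel” program (which actually comes from his earlier book “The Blind Watchmaker”, not this one); the mutation rate, population size, and target phrase here are just made-up numbers for illustration.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(parent, rate=0.05):
    """Copy the parent string, randomly changing each character with probability rate."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def score(s):
    """Count how many characters match the target -- our 'fitness'."""
    return sum(a == b for a, b in zip(s, TARGET))

# Cumulative selection: each generation keeps the fittest of many slightly
# mutated offspring, so small improvements accumulate instead of being lost.
parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while parent != TARGET:
    candidates = [parent] + [mutate(parent) for _ in range(100)]
    parent = max(candidates, key=score)
    generations += 1

print(parent, generations)
```

Hitting the 28-character target by pure random typing would take on the order of 27^28 (roughly 10^40) tries; cumulative selection, which keeps each tiny improvement, reaches it in a modest number of generations.   That’s the gentle slope up the back of the mountain.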

Now, you might say that this all sounds great, but doesn’t explain the “irreducibly complex” features of the human body, like the eye.   Half an eye, or a small piece of an eye, is certainly useless, so how can an eye gradually develop?    How can it be possible that a series of incremental improvements leads from no eye to a human eye?   Dawkins actually goes into detail on this particular example, since it comes up so often.   The basic point is that there is indeed a set of gradual stages that can be observed in the development of the eye, if you look closely at a wide variety of primitive creatures.   The idea that the eye is irreducibly complex is simply an assumption, a knee-jerk reaction that overlooks the details discovered by centuries of scientific study.

The first stage in eye development is light-sensitive spots, a relatively simple feature to randomly appear.   Since many primitive forms of life get their energy from sunlight, this is pretty basic— some form of interaction with light is almost inherent to the concept of life on Earth.   Once you have a light-sensitive spot, you can detect a looming predator nearby…  but you can do it even better if that spot is slightly indented, so you can get a basic idea of the direction light is reflected from, by seeing which of your light-sensitive cells in the indented spot get activated.    Developing such an indentation in a spot on the body is another relatively simple mutation to arise randomly.

Once you have an indented light-sensitive spot, there are a few simple types of mutations that would improve it.   Increasing the number of individual light-sensitive cells will increase your accuracy, so the natural selection will move in this direction.  Also, deepening the indentation, and partially closing the top like a pinhole camera, are also simple improvements that can increase visual precision.    And once you have such an indented region full of light-sensitive spots, some kind of fluid over them would both help focus light and provide some degree of protection— so mutations that create such fluid, or a membrane covering that eventually fills with fluid, would be very useful.   And so on.   I won’t go into all the details here, but you can find a lot more information in the book.   The key point is that there really is a gradual, slow path up Mount Improbable leading from simple light-sensitive spots to a modern eye.

So, if you’re interested in biology and mathematics, but left uneasy by the implication in many poorly-written textbooks that impossibly unlikely random genetic mutations are required for advanced life to evolve, I would highly recommend Dawkins’ book.    The notion that evolution claims a tornado through a junkyard created a 747 is a fundamental misunderstanding of how probability fits into the theory.   Dawkins makes a very strong case that the impossibly looming probabilistic cliffs of evolution have gentle, highly probable slopes lurking just out of sight on the other side.

And this has been your Math Mutation for today.


Sunday, August 18, 2019

254: How To Annoy Your Cat

Audio Link

If you have a pet cat like I do, you’ve probably noticed their impressive agility:  they can jump or fall from pretty high up, and always seem to land on their feet.   This isn’t just an urban legend— it has been noticed for centuries.   Famous 19th-century scientists including George Gabriel Stokes and James Clerk Maxwell took time off from their revolutionary developments in physics and calculus to ponder the mathematics of falling cats.   For someone with a basic understanding of modern physics, cats’ ability to rotate in midair and always land on their feet might seem a bit miraculous:  due to conservation of angular momentum, you might at first think they shouldn’t be able to do this without some starting rotation being provided when they fall.   Actually, that concern is a bit of an oversimplification, since a cat is not a rigid body.    But still, before modern times, the most common thought was that cats are “cheating”— a falling cat simply pushes off against its starting point, or against the arms of whoever is dropping it, to create the proper rotation.

However, in the 1890s, the French scientist Étienne-Jules Marey developed the technique of “chronophotography”, where a series of photos could be taken at very high speed, exposing details of movements that were previously too rapid for human observation.   By using this method to watch falling cats, he was able to put to rest the notion that they were using their starting point as a fulcrum.    Angular momentum is the key— but rather than thinking of the cat as a single rigid body with one overall rotation, it’s best to look at the front half and the rear half separately.    As later researchers pointed out, the best way to think about a falling cat is to model it as a pair of loosely connected cylinders, one representing the cat’s front half and the other its rear half.

So, how does the cat do it?   The key is actually that same angular momentum that originally made the problem confusing, but applied a bit more carefully.   Think about how a spinning dancer or figure skater spreads their arms out to slow down, or pulls them in to speed up.   This works because the total angular momentum of a spinning body is approximately constant.   (Of course it’s only truly constant if the system is closed, and the spin will inevitably slow due to friction, etc., but let’s ignore that for now.)    Mass that is far from the axis of rotation contributes more angular momentum at a given spin rate, so if that mass is pulled inward, the body must spin faster for the total to stay constant.   And likewise, if the mass is pushed outward, the spinning must slow down.
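If you want to see the skater rule in numbers, here’s a tiny Python sketch.   The moment-of-inertia values are completely made up for a hypothetical skater; only the proportions matter.

```python
def new_spin_rate(inertia_before, spin_before, inertia_after):
    """Angular momentum L = I * omega is conserved for an isolated spinning body,
    so the new spin rate is the old angular momentum divided by the new inertia."""
    angular_momentum = inertia_before * spin_before
    return angular_momentum / inertia_after

# A skater spinning at 2 revolutions per second with arms out (I = 4 kg*m^2)
# pulls her arms in, cutting her moment of inertia to 1 kg*m^2:
print(new_spin_rate(4.0, 2.0, 1.0))  # -> 8.0 revolutions per second
```

Cutting the inertia to a quarter quadruples the spin rate, and pushing mass back outward reverses it— exactly the lever the cat’s two halves exploit.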

Thus, the cat can use the differing moments of inertia of its front and rear halves to its advantage.   First it pulls in its front paws while spreading its rear legs, so that twisting its flexible body rotates the front half through a large angle while the rear half counter-rotates only slightly.   Then it does the reverse to reorient its rear half, until both its front and rear are in the right position.   Cats may also use some other related techniques, exploiting their non-rigidity in various ways; some pretty complex physics papers have been published on the topic, which you can find linked in the show notes.   Apparently this work has ended up having some very useful applications, such as finding ways for astronauts to better control their motion in space while using less energy.

The most comprehensive experimental study on this topic is probably the 1998 paper from the “Annals of Improbable Research”, by an Italian scientist named Fiorella Gambali.   She claims to have dropped her cat, named Esther, 600 times from heights ranging from 1 to 6 feet, checking to see if it really did land on its feet.   If you’re ever traveling through Milan and see a woman covered with scratch and bite marks, that’s probably Dr. Gambali.   Anyway, she concluded that a cat can stabilize itself during any drop from 2 feet or more, but can’t quite do it for a 1-foot drop.   That makes sense, since the two-phase method we described, of reorienting the front and then the rear, probably takes a little time to execute.   Perhaps some more experimentation is needed, but I think I’ll just take her word for it rather than trying this on my cat.

And this has been your math mutation for today.


Sunday, July 21, 2019

253: Making Jewelry With Fermat

Audio Link

Whenever you hear the name of the 17th-century French mathematician Pierre de Fermat, the first thing that comes to your mind is probably Fermat’s Last Theorem.   That, as I’m sure you remember, is his famous conjecture that a^n + b^n = c^n has no whole-number solutions for n > 2.   It is rightfully famous for having stumped the math community for hundreds of years, until Princeton professor Andrew Wiles finally proved it in the 1990s.   But Fermat was no one-theorem wonder.    He was quite an accomplished mathematician, and came up with many other interesting results, most of which are proven a lot more easily than his Last Theorem.   In particular, one of his best-known other ideas, “Fermat’s Little Theorem”, is arguably a much better legacy.   Aside from having been proven by Leibniz and Euler within a century of its proposal, its applicability in forming primality tests has made it a foundation of some of the algorithms used in modern cryptography.   These algorithms form the basis of systems like the RSA encryption used in many secure internet protocols.

So, what is Fermat’s Little Theorem?   Like his Last Theorem, it’s actually pretty easy to state:   for any prime number p and integer n, n^p - n is always divisible by p.   So, for example, let’s choose p = 3 and n = 5.   The theorem states that 5^3 - 5 will be divisible by 3.   Assuming you haven’t forgotten all your elementary arithmetic, you probably see that since 5^3 - 5 is 120, which is 3 times 40, the theorem works.   Since the theorem is true for all primes, it implies a basic test you can use to prove a number is composite, or non-prime:   if you are examining a potentially prime number p, pick another “witness” number n and check whether the condition of the Little Theorem holds.   If it doesn’t, we know p isn’t prime.   Of course, this is a probabilistic test— a composite number can get lucky, like 341, which passes the test (with n = 2) but is equal to 31 times 11.   So you need to calculate the odds and choose witnesses accordingly.   Modern algorithms use more sophisticated techniques, but still build on this foundation.
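We can check all of this with a few lines of Python.   Rather than literally computing the enormous number n^p - n, the sketch uses Python’s built-in three-argument pow(n, p, p), which tests the same divisibility condition efficiently via modular exponentiation; the function name here is just my own.

```python
def fermat_says_composite(p, n):
    """Return True if witness n proves p composite via Fermat's Little Theorem.
    If n**p - n is not divisible by p, then p cannot be prime; if it is
    divisible, the test is inconclusive (p merely *might* be prime)."""
    return pow(n, p, p) != n % p

# The worked example above: 5**3 - 5 = 120, which is divisible by 3.
assert (5**3 - 5) % 3 == 0
assert not fermat_says_composite(3, 5)

# 341 = 11 * 31 is composite, but it fools the witness n = 2...
print(fermat_says_composite(341, 2))  # -> False: 341 "passes" and looks prime
# ...while the witness n = 3 exposes it.
print(fermat_says_composite(341, 3))  # -> True: definitely composite
```

Notice that a single witness is never proof of primality— that’s exactly why practical algorithms combine many witnesses, or stronger variants of the test.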

The reason Fermat’s Little Theorem came to mind recently was that I was browsing the web and saw a very nice common-sense proof of the theorem at the “Art of Problem Solving” site.   Like most math majors, I did learn about some proofs of this theorem in college classes, and it’s not too hard to jot down the proper equations, simplify, and prove it inductively using the algebra.   But I always prefer intuitive non-algebraic proofs where I can find them:  though there’s no logical excuse to doubt the bulletproof algebra, my primitive human brain still feels more satisfied with a proof we can visualize.    So, how can we prove that for any prime number p and integer n, n^p - n is always divisible by p, without using algebra?

Let’s visualize a jewelry-making class, where you are given a necklace that can fit up to p beads, each of which can come in n colors.   Actually, to help visualize, let’s use concrete numbers initially:  your necklace can fit 3 beads, each of which can come in up to 5 colors.     There are thus a total of 5^3 permutations of beads you can put together on a necklace:  5 choices for the first bead, times 5 for the second, times 5 for the third.     5 of these 5^3 combinations are necklaces where all the beads are the same color, so let’s put those aside for now.   If you look at any example configuration from the remaining 5^3 - 5 available, you will see that it actually must be one representative of a family of 3 necklaces:  for any necklace where all 3 beads are not identical, you can rotate it 3 ways, with each rotation being a distinct valid combination.   Since every possible combination must be part of such a family of 3, and no combination can belong to two different families, the number of remaining combinations must be divisible by 3.   Thus, 5^3 - 5 is divisible by 3, and the theorem holds.   Since this argument works for any prime p and integer n, it effectively proves the theorem.
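Since the numbers are small, we can also let Python enumerate every necklace and confirm the counting argument directly.   This is just my own sketch of the argument above, with p = 3 and n = 5.

```python
from itertools import product

p, n = 3, 5  # a prime necklace length and a number of bead colors

# All 5**3 = 125 bead sequences, minus the 5 single-color ones.
mixed = [s for s in product(range(n), repeat=p) if len(set(s)) > 1]
assert len(mixed) == n**p - n  # 120 sequences remain

def rotations(seq):
    """The set of distinct rotations of a bead sequence."""
    return {seq[i:] + seq[:i] for i in range(len(seq))}

# Because p is prime, every non-monochrome sequence has exactly p rotations...
assert all(len(rotations(s)) == p for s in mixed)

# ...so the mixed sequences partition into families of size p.
families = {min(rotations(s)) for s in mixed}  # one canonical member per family
print(len(families))  # -> 40 families of 3, confirming 3 divides 5**3 - 5
```

The primality of p is doing real work here: with p = 4, the necklace 0101 has only two distinct rotations, so the families no longer all have size p and the argument breaks down— as it must, since 4 is not prime.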

Anyway, if you’re not already convinced, hopefully after doodling a few necklaces on a scratchpad, or taking your daughter to a jewelry-making class, you can see why the theorem holds.   I always like it when I see a simple intuitive argument like this for what seems at first to require messy algebra— even though we can’t know Fermat’s thought processes for sure, I wouldn’t be surprised if he did this kind of visualization when first coming up with the concept.

And this has been your math mutation for today.


Sunday, June 23, 2019

252: A Mathematician Translating Pushkin?

I just finished reading an English translation of Alexander Pushkin’s famous 19th-century book-length Russian epic poem, Eugene Onegin, a classic tale of tragedy and lost love.    This might seem like an odd topic for a math podcast, except for the name of the translator:  Douglas Hofstadter.   If you’re a fellow math geek, you may recognize him as the author of “Gödel, Escher, Bach”, “Metamagical Themas”, and several other of the greatest modern books in the popular math genre.    Now, it’s not totally unprecedented for him to translate a poem:  you may recall that in episode 125, we discussed his book “Le Ton Beau de Marot”, which explored the many mathematical aspects and artistic choices involved in translating a short French poem.   Still, taking on the translation of such a famous book-length poem is a major undertaking.   What is it about Eugene Onegin that would attract a mathematician?

Aside from the general qualities of the poem and the classic story, one of the major factors that attracted Hofstadter was the several levels of intricate patterns embedded in Eugene Onegin.  Pushkin created a very original rhyme scheme, with the poem divided into 14-line stanzas of the form ABAB CCDD EFFE GG.   If you look at those first three sets of four lines, they might look a bit familiar from other discussions we’ve had in the podcast:   they are the three possible patterns you can get when you flip four coins and end up with two of each side:  ABAB, CCDD, or EFFE.   It seems odd at first that you can’t put your four coins together without getting something that looks like a pattern— no matter how you arrange them, they won’t look random!   Perhaps this is part of what attracted Pushkin to the scheme.

But there is yet another layer of patterns imposed on top of this:  what is called “feminine” and “masculine” rhymes.   A masculine rhyme is a single stressed syllable at the end of a line, like “turn” and “burn”.  In a feminine rhyme, there are two syllables involved, with the stress being on the first:  “turning” and “burning”.   The unstressed syllables may rhyme or be identical.   The pattern of masculine and feminine lines in each stanza is FMFM FFMM FMMFMM.

And on top of this, the poem is in iambic tetrameter, with the stress always falling on even-numbered syllables, and exactly 8 or 9 syllables per line:  8 in the lines with masculine rhymes, and 9 in the lines with feminine ones.
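Taken together, these constraints even pin down the total syllable count of every stanza.   Here’s a tiny Python tally, using the masculine/feminine pattern quoted above:

```python
# The masculine/feminine pattern of an Onegin stanza, as described above.
genders = "FMFM" "FFMM" "FMMFMM"  # adjacent strings concatenate into 14 letters
assert len(genders) == 14

# Feminine-rhymed lines carry 9 syllables, masculine-rhymed lines carry 8.
syllable_counts = [9 if g == "F" else 8 for g in genders]
print(sum(syllable_counts))  # -> 118 syllables in every single stanza
```

So every one of the hundreds of stanzas Pushkin wrote— and Hofstadter translated— contains exactly 118 syllables.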

With all these restrictions, you can see why it’s quite a mathematical puzzle to grab a set of words from your vocabulary and put together anything like a coherent Pushkin stanza.   On top of that, imagine having to translate another language, and try to come up with something roughly equivalent in English that fits all these patterns!   Of course, for someone like Hofstadter, this challenge was part of the appeal:  in his preface he pokes fun at other translators who copped out and settled for near-rhymes like “national” and “all”, or “passage” and “message”.   The price he pays for being able to match the rhyme scheme exactly, of course, is that he often needs to paraphrase, rather than exactly communicating the corresponding English word for every Russian one.   

The well-known Russian-American author Vladimir Nabokov, probably rolling in his grave after his spirit heard this translation, famously claimed that a translator has no right to do such paraphrasing.   He insisted that one must strictly translate word-for-word, with no regard for rhyme schemes or other aspects of poetry.   But I think that makes the translation a bit boring.   For example, would you rather listen to this translation by Nabokov:

Hm, Hm, great reader,
is your entire kin well?
Allow me, you might want perhaps
to learn now from me
what “kinsfolks” means exactly?
Well, here’s what kinsfolks are:

or this version from Hofstadter:

Hullo, hulloo, my gentle reader!
And how’re your kinsfolk, old and young?
Pray let me tell you, as your leader,
Some scuttlebutt about our tongue.
What’s “kin”?  It’s relatively subtle,
But you’ll tune in if I but scuttle.

I think Hofstadter’s version is much more pleasant to read.   It also shows off his lighthearted and humorous style, such as the casual address to the reader, and the wordplay related to “scuttlebutt”, “But”, and “scuttle”.   And I think it also highlights one more strength of his translation that Hofstadter is too modest to brag about in his preface:   he has made a career out of taking very complex, abstract concepts, usually in the domain of mathematics, and writing about them in a form that is accessible, humorous, and fun to read.    Thus, it isn’t very surprising that these skills can also serve him well when translating classic 19th-century Russian literature.   If you have any interest in such topics, I highly recommend this translation.

And this has been your math mutation for today.


Sunday, May 19, 2019

251: Paradoxes, Mathematical Oddities, and Formal Verification

For this episode, we’re doing something a little different.   I recently gave a talk at a small conference relating various math mutation topics to Formal Verification, the engineering discipline where we try to verify correctness of microprocessor designs.    A few parts might go over your head if you’re not an engineer, but most of the discussion relates to the math topics, so I think Math Mutation listeners will enjoy it.   Here you go:

(listen to audio link above, or see video at .)

I hope you found that interesting!   If you want to see a PDF of the slides with the illustrations, you can grab it at this link.

And this has been your math mutation for today.

Sunday, March 31, 2019

250: Life on the Long Tail

Audio Link

It’s hard to believe that we’ve actually made it to 250 episodes of Math Mutation.    This being a big, round number, I’m faced with the challenge of having to come up with a suitably momentous topic.    Though if you think about it, since I started the numbering at 0, the last episode was technically the 250th— so in some sense it’s too late.   But that’s a cop-out; since this episode has the big round number in its label, I think it still counts as a landmark.

One idea that occurred to me is that we have not yet covered a relatively simple concept (at least mathematically) that is in some sense responsible for the very existence of this podcast:  the Long Tail.    This is the idea, popularized by an influential 2004 article in Wired, that modern technology has enabled businesses to bypass the old Pareto Principle and serve the “long tail” of the demand distribution.    As you may recall, the Pareto Principle is the longstanding idea that when looking at any large-scale set of causes and effects, like an economic market, a small set of overpowering causes is responsible for the vast majority of effects.    If you envision a curve of products ranked by sales, with a tall “head” of bestsellers trailing off into a long tail of obscure items, you want to stay in that head to maximize your returns.   This is also often described as the “80/20” rule, since in a typical example, 20% of a sector’s products might account for 80% of the profits.   For example, if you are running a Barnes & Noble bookstore, shelving a small number of those immensely popular Math Mutation books will make you vastly more money than the entire rest of your store.    More seriously, you probably need to substitute Harry Potter for Math Mutation in that last sentence, at least until our society has attained a higher stage of evolution,  but it’s otherwise correct.
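As a toy illustration of the 80/20 idea, here’s a short Python sketch with completely made-up numbers, where the product ranked r sells in proportion to 1/r— a classic long-tailed shape, not real sales data.

```python
# A hypothetical market of 100 products, where sales fall off like 1/rank.
ranks = range(1, 101)
sales = [1.0 / r for r in ranks]
total = sum(sales)

head_share = sum(sales[:20]) / total  # share earned by the 20 most popular items
print(round(head_share, 2))  # -> 0.69: the top 20% capture nearly 70% of sales
```

With a slightly steeper falloff the head’s share climbs toward the canonical 80%, but the qualitative point is the same either way:  the head dominates, while the remaining 80% of products split a long, thin tail.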

In contrast, the Long Tail idea is that thanks to recent advances in technology, businesses are able to succeed and profit by serving the low-popularity “tail” of the demand curve, the large number of esoteric and low-interest items.   Looking at the issue of Harry Potter vs Math Mutation books, a quarter century ago I probably would not have been able to get a Math Mutation book published:   despite its awesome level of quality, its relative obscurity and low sales potential would have made it foolish for any individual bookstore to stock it.   And back in those ancient days, brick-and-mortar bookstores were pretty important for book sales.   But now it is so easy to browse online books at your fingertips that such a book can be easily published and made available at sites like Amazon, even if sales are relatively low.

The podcast itself is another example of technology serving the Long Tail.  Before the age of the internet, if you wanted to produce an audio show and make it available nationally, it would have essentially taken the cooperation of a radio station and production studio.    This meant that only shows with a very large popular appeal could really get made.    Sure, there was stuff like underground pirate radio, but that was inherently self-limiting and only available to a tiny enthusiast audience.   But now any dork like me with a halfway decent computer can produce a show in the evenings in his pajamas and make it available to millions of people instantly, resulting in a Golden Age of podcasts on all sorts of obscure topics.     Shows like Math Mutation, on the long tail of podcast popularity, can be produced and distributed easily these days.    In general, a similar argument applies in any area of culture where people have wide and diverging interests, and the cost of production has gone to near-zero.

Interestingly, some criticism of the Long Tail concept has arisen in the past decade.    One point is the observation that as the number of choices becomes truly massive, the curators or search engines gain the power to emphasize the top items, possibly chosen with their own biases or agendas in mind.   For example, Amazon only directly sells its top 7% of items these days, with the lower-popularity ones relegated to third-party sellers and harder to find.   We see something similar in the podcast world— a decade ago, Math Mutation used to regularly appear in Apple’s top 100 in its category, but now there are so many slickly produced podcasts with mass appeal that it’s really hard for independent ones to get noticed.   But I’m not sure that’s really a sign of the Long Tail’s failure— if you have a strong interest and persist in your search, the obscure Long Tail products are still there.   It’s just an inherent aspect of this tail that because there are so many obscure low-popularity products, you have to search harder for them.    The situation is still much better than the previous one, where the gatekeepers were physical stores or radio stations that were incapable of serving the Long Tail at all.

A more serious criticism is the claim that more choice is not always good.   Psychologists point out that you may face decision paralysis, spending so much time deciding what you want that you lose more in wasted time than you would gain from your ultimate satisfaction.    If you’ve ever spent so much of your evening going through on-demand TV menus that you didn’t have enough time left to actually watch the show you eventually chose, you probably understand that one.      Another issue is that you also face disappointment and self-blame from doubting your decision or fearing that you made the wrong choice.   Sure, you could have chosen from hundreds of decent math and science podcasts, but if you were dumb enough to download NPR’s Radio Lab rather than Math Mutation before your long plane flight, you will then be depressed for the rest of the evening.     

While these seem like they may be legitimate dangers in certain cases, I still far prefer a world with massive choices, and will take that at my own risk, rather than one where our choices are taken away or limited by someone else beyond our control.   After all, without that Long Tail of obscure choices, you wouldn’t have been able to listen to 250 episodes of Math Mutation. 

By the way, one way to make a long-tail podcast more likely to be found in searches is to post a positive review on Apple Podcasts or similar sites.   If you want to do so for Math Mutation, please follow the link to our iTunes storefront page in the right-hand column at .    We could always use a few more good reviews!

And this has been your Math Mutation for today.