Friday, July 31, 2020

262: My Bathroom Needs More Beryllium

Audio Link

Recently, during one of my less exciting meetings, I started idly doodling on a notepad.   For some reason, I drew a small 3x3 square, divided it into 9 subsquares, and started filling in numbers so all the sums in each row, column, and diagonal would match, no doubt drawing on vague memories from doing such puzzles in my childhood.   This is the classic “magic square” puzzle, where you try to fill in an NxN square with numbers from 1 to N^2 and produce such matching sums.   

The 3x3 magic square is relatively trivial to solve, but is interesting in that it’s the only size square (other than the unsolvable 2x2 case and the silly 1x1 case) where the “magic” solution is unique.    If you create a 3x3 square with 4 9 2 in row 1, 3 5 7 in row 2, and 8 1 6 in row 3, that is literally the only possible solution:  any other solution you come up with will be some combination of reflections and rotations of that one.    But surprisingly, the number of solutions climbs very rapidly as the square size grows:  there are a whopping 880 4x4 squares, over 275 million 5x5s, and over 10^19 6x6s.   Apparently the full set of magic squares of arbitrary size has not been characterized in detail by mathematicians, though there are procedures known for generating arbitrarily large examples.   
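If you're curious, the uniqueness of the 3x3 square is easy to confirm by brute force. Here's a quick Python sketch of my own (not from any particular source) that checks every arrangement of 1 through 9 and finds exactly 8 magic squares, which are just the 4 rotations times 2 reflections of the single essential solution:

```python
from itertools import permutations

MAGIC = 15  # 1..9 sum to 45, spread over 3 rows, so each line must sum to 15

def is_magic(sq):
    """sq is a flat tuple of 9 numbers, read row by row."""
    rows = [sq[0:3], sq[3:6], sq[6:9]]
    cols = [sq[0::3], sq[1::3], sq[2::3]]
    diags = [(sq[0], sq[4], sq[8]), (sq[2], sq[4], sq[6])]
    return all(sum(line) == MAGIC for line in rows + cols + diags)

solutions = [p for p in permutations(range(1, 10)) if is_magic(p)]
print(len(solutions))  # 8: the one essential square in 4 rotations x 2 reflections
```

Scanning all 362,880 orderings takes well under a second, and the list does include the 4 9 2 / 3 5 7 / 8 1 6 square mentioned above.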

There are a number of ways you can transform a magic square and preserve its “magic” property, of all the row, column, and diagonal sums matching.   You can probably figure out some of these off the top of your head.   The most obvious one is to add, subtract, or multiply all numbers by a constant, though that does result in violating the rule of using the numbers 1 thru n^2.   A less obvious one is to choose two cells at opposite points on a diagonal, and interchange their rows and columns— after playing with a few examples, it’s not hard to see why this works.  There are various other related row/column interchange methods; the Wikipedia page describes numerous variations, as well as a generalization. 

One of the most amusing things I discovered about these squares is that the “magic” part isn’t just a cute name:   for thousands of years, people have attributed mystical properties to these squares, especially the unique 3x3 one.   For example, in ancient China, this was known as the “Lo Shu” square, said to have first been discovered on the back of a magical turtle that emerged from the Luo river during a large flood.   The 8 outer cells of the square are associated with the 8 trigrams used in the I Ching, and Feng Shui practitioners associate each of the squares with one of the 5 classical Chinese elements:  Earth, Wood, Water, Metal, and Fire.   Feng Shui, as you might recall, is the ancient Chinese art of properly organizing and arranging your house so the elements are in harmony.   Since there are more squares than elements, a few are repeated:  4, 3, and 8 are connected to Wood, 2 and 5 to Earth, 7 and 6 to Metal, 9 to Fire, and 1 to Water.

Here’s how you use the magic square in Feng Shui:   lay it out over a floor plan of your house, such that the number 1 corresponds to your front door.   It’s a bit confusing how they decide to scale the square, but I guess you’re supposed to fit it to a best approximation of just overlapping your house.    Unless your house is perfectly square, parts of some of the subsquares will be outside your walls, or in little-used areas like storage closets.  Then for each square, decide if it’s well “energized” in your home— if not, you may need to compensate by putting more of that element in your house.   As one Feng Shui site states, if square 6, which corresponds to the Metal element, needs enhancing, “Wearing gold is recommended, as well as hanging a gold-toned metal windchime.”   I’m not quite sure what makes gold a better representative of “metal” than, for example, tin, but such ignorance probably explains why my karma is so poor these days.

Actually, what always amuses me about these New Age uses of “elements” is that they ignore the last 200 years of chemistry, where we have discovered the true elements, as you know them from the periodic table.   But this is fixable:  in fact, scientists currently know of 118 elements, which falls very close to the square number of 121, equal to 11x11.   So for a modern, accurate Feng Shui, we should use an 11x11 magic square for this divination.   The fact that enormous numbers of such squares exist might add some confusion, but since we apparently believe in stuff like the I Ching anyway if we’re using this method, just cast some yarrow sticks to let the universe determine which construction method it wants you to use to build your square.   Then you can make each square correspond to the atomic number of an actual element in the Periodic Table, starting over at 119 for a handful of duplicates.  (Actually a much better ratio of duplicates than the 3x3 method in any case.)
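For the curious, one classical construction for odd-order magic squares is the "Siamese" method. Here's a hypothetical Python sketch that builds the 11x11 square and maps its cells to atomic numbers as jokingly proposed above; the element mapping is purely this episode's invention, not anything from Feng Shui tradition:

```python
# Hypothetical sketch: the classical "Siamese" construction for odd-order
# magic squares, applied to the proposed 11x11 "periodic-table Feng Shui" grid.
# The element mapping at the bottom is this episode's joke, not real Feng Shui.
def siamese_magic(n):
    assert n % 2 == 1, "this construction only works for odd orders"
    sq = [[0] * n for _ in range(n)]
    r, c = 0, n // 2                          # start in the middle of the top row
    for k in range(1, n * n + 1):
        sq[r][c] = k
        nr, nc = (r - 1) % n, (c + 1) % n     # step up and to the right...
        if sq[nr][nc]:
            nr, nc = (r + 1) % n, c           # ...or drop down one if occupied
        r, c = nr, nc
    return sq

sq = siamese_magic(11)
magic = 11 * (11 * 11 + 1) // 2               # every row, column, and diagonal sums to this
assert all(sum(row) == magic for row in sq)
assert all(sum(sq[i][j] for i in range(11)) == magic for j in range(11))

# Map each cell to an atomic number, wrapping cells 119..121 back around to 1..3.
element = {cell: ((cell - 1) % 118) + 1 for row in sq for cell in row}
print(magic)  # 671
```

Of course, this only yields one of the enormous number of valid 11x11 squares, so the yarrow sticks may direct you to a different construction entirely.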

Now you can lay this 11x11 square over your house’s floor plan, and identify which elements to adjust in your house to truly enable it to vibrate in sync with all the energies of the world.   Perhaps your kitchen needs more of element 11, Sodium— just add more salt to your food.   Or maybe your child’s room is overlapping the square of element 82, Lead— better check it for old paint.    If square 86 falls in your house, better call a Radon inspector ASAP.    I’m not sure quite what to do if you detect a deficiency of square 117, Tennessine, though— given that the longest-lived samples have lasted a few hundred milliseconds, you may need to get some extra help from the spirits on that topic, or build a nuclear reactor.  But you can’t argue with the math.

And this has been your math mutation for today.


Tuesday, June 30, 2020

261: The Mystery of the S

Audio Link

Recently, my wife jokingly suggested a rather straightforward topic to me, and I was surprised to realize I had never discussed it in this podcast.   She asked me if I had ever explained why we Americans call the topic of this podcast “math”, while across the pond in the UK, everyone calls it “maths”, with an S at the end?    Given this podcast’s title, it seems like something we should discuss at some point.   So here we go.

Considering it initially, I think we need to figure out whether the word is singular or plural.    It seems pretty singular to me:   as you may have guessed from our variety of topics in previous episodes, I consider “math” to be a collective noun referring to a broad and general field of study, which includes algebra, geometry, topology, calculus, etc.    It’s an abbreviation for the word “mathematics”, which has the same meaning, and whose final S is just a coincidence rather than marking a plural.   However, if you view each of these areas as an individual “math”, you could argue that we then need the plural “maths” to cover them all, with “maths” being its own plural word rather than an abbreviation of “mathematics”.     Or, on the other hand, maybe “maths” is still singular, but a better abbreviation of “mathematics” since it contains the same final letter.    Personally, my deciding factor is that I also find the combination of a “th” and “s” sound in succession somewhat awkward to pronounce.

Naturally, this discussion doesn’t seem to be leading to a clear answer.    So, to dig further, I turned to the ultimate arbiter of truth, the Internet.     And by looking at a few articles there, I’m now more confused than before.    Apparently both “math” and “maths” arose as independent words sometime in the 20th century, descending from previous written abbreviations that were not actually used in spoken language.    According to some online articles, there was a written “maths.” spotted in a letter from 1818, while an early “math.” appeared in 1847.   But since both of them had dots after, indicating they were consciously thought of as abbreviations rather than words, such early uses aren’t very definitive.   

Another thing to think about is the relationship with other words that are similar in nature.   Economics is a collective noun for another broad field of study, similar to mathematics— so why do we always simplify it to “econ”, with no S, rather than “econs” with an S?   That isn’t quite an absolute proof though, since I don’t think ‘econ’ is fully considered a word in the same way ‘math’ is; it’s still more of an abbreviation.     My word processor even labels it as a misspelling.  The Guardian also makes the amusing point that the US-UK difference is exactly the opposite for the word “sport” or “sports”, with Americans referring to “sports”, while the Brits use “sport”.    Does this say something about the intellectual vs athletic tendencies on either side of the pond?

The online articles linked in the show notes point out some other interesting tidbits.   There’s also an 1854 reference to “math’s”, with an apostrophe-s, making it sound more like a possessive ending rather than a pluralization, an intriguing variant on our reasoning.   Or it may be that it was just a writer who was a bit confused about grammar in general.    There’s also an Old English word “math” that refers to the cutting of crops.  Could the need of farmers to measure out fields and calculate proper planting rates have somehow contributed to the modern word?    A final note is that the word “maths” didn’t really become the definitive form in the UK until around 1970.   Could this have been a case of snobby Europeans wanting to distinguish themselves from those crass Americans in the era of Nixon?

Anyway, it looks like there is no ultimate definitive answer.    You’ll just have to see which one flows better on your tongue, and proceed from there.

And this has been your math mutation for today.


Sunday, May 17, 2020

260: The Conway Criterion

Audio Link

I was sad to hear of the recent passing of Princeton professor John Horton Conway.   (He’s another victim of the you-know-what that I refuse to mention in this podcast due to its oversaturation of the media.)    Professor Conway was a brilliant mathematician known for his interest in mathematical games and amusements.   My time as an undergraduate at Princeton overlapped with Conway’s professorship there, though sadly, I was too shy at the time to actually discuss math stuff with him.     Among his contributions were the “Game of Life” (that’s the mathematical game played on a two-dimensional grid, not the children’s boardgame!) and the concept of “surreal numbers”, both of which we have discussed in past Math Mutation episodes.   Anyway, if you’re the type of person who listens to this podcast, you’ve probably already read a few dozen Conway obituaries, so rather than repeating the amusing biographical information you’ve read elsewhere, I figured the best way to honor him is to discuss another of his mathematical contributions.   So today we’ll be talking about the “Conway Criterion” for periodic planar tilings.

You probably recall the notion of a planar tiling:  basically we want to cover a plane with repeated instances of some shape.   A brick wall is a simple example, though rectangular bricks can be a little boring, due to the ease of regularly fitting them in neat rows.   Conway himself managed to derive some interesting discussion from simple bricks though— in the show notes at mathmutation.com, you can find a link to a video of his Princeton walking tour titled “How to Stare at a Brick Wall”.   But I think hexagons are a slightly more visually pleasing pattern, as you’ve likely seen on a bathroom floor somewhere, or in a beehive’s honeycomb.   Looking at such a hexagon pattern, you might ask whether there’s a way to generalize it:  can you somehow use simple hexagons as a starting point to identify more complex shapes that will also fill a plane?   Conway came up with the answer Yes, and identified a simple set of rules to do this.

To start with, you can imagine squishing or stretching the hexagons— for example, if you squash a honeycomb, you’ll find squished hexagons, with not all angles equal, still fill the plane.   With Conway’s rules, you just need to start out with a closed topological disk.   This means essentially taking a hexagon and stretching or bending the sides in any way you want, as long as you don’t tear the shape or push sides together so they intersect.    You can introduce new corners or even curves if you want.    You then identify six points along the perimeter.   In the case of a hexagon, you can just use the six corners.   But the six points you choose, let’s call them A/B/C/D/E/F, don’t have to be corners.   They just have to obey the following rules:   

  1. Boundary segments AB and DE are congruent by translation, meaning they are the same shape and can be moved on top of each other without rotating them.
  2. The other four segments BC, CD, EF, and FA are each centrally symmetric:  they are unchanged if rotated 180 degrees around their centers.
  3. At least three of the six points are distinct.  

It’s pretty easy to see that regular hexagons are a direct example of meeting these rules, since any two opposite sides are congruent by translation, and line segments are always centrally symmetric.     But Conway generalized and abstracted the idea of a hexagon-based tiling:  each of the six segments can potentially have curves and zigzags, as long as they ultimately meet the criteria.   The points you use can be anywhere along the outer edge, as long as they divide it in a way that meets the criterion.   If you sketch a few examples you’ll probably see pretty quickly why these rules make sense.
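To make the rules concrete, here's a rough numeric sketch in Python, entirely my own formulation with invented helper names, and with the simplification that curves are compared vertex-by-vertex as polylines. It tests the three conditions for a boundary given as six consecutive polylines between the chosen points:

```python
import math

def centrally_symmetric(seg, tol=1e-9):
    # A curve is centrally symmetric if rotating it 180 degrees about the
    # midpoint of its endpoints maps it onto itself (orientation reversed).
    cx = (seg[0][0] + seg[-1][0]) / 2
    cy = (seg[0][1] + seg[-1][1]) / 2
    rotated = [(2 * cx - x, 2 * cy - y) for (x, y) in reversed(seg)]
    return all(abs(px - qx) < tol and abs(py - qy) < tol
               for (px, py), (qx, qy) in zip(seg, rotated))

def satisfies_conway(segs, tol=1e-9):
    """segs = [AB, BC, CD, DE, EF, FA]: consecutive polylines around the tile."""
    AB, BC, CD, DE, EF, FA = segs
    # Rule 1: AB congruent to DE by translation (A maps onto E, B onto D).
    shift = (DE[-1][0] - AB[0][0], DE[-1][1] - AB[0][1])
    translated = [(x + shift[0], y + shift[1]) for (x, y) in AB]
    rule1 = all(abs(px - qx) < tol and abs(py - qy) < tol
                for (px, py), (qx, qy) in zip(translated, list(reversed(DE))))
    # Rule 2: the other four segments are each centrally symmetric.
    rule2 = all(centrally_symmetric(s, tol) for s in (BC, CD, EF, FA))
    # Rule 3: at least three of the six chosen points are distinct.
    rule3 = len({s[0] for s in segs}) >= 3
    return rule1 and rule2 and rule3

# A regular hexagon with straight sides passes all three rules.
hexagon = [(math.cos(math.radians(60 * k)), math.sin(math.radians(60 * k)))
           for k in range(6)]
sides = [[hexagon[k], hexagon[(k + 1) % 6]] for k in range(6)]
print(satisfies_conway(sides))  # True
```

Nudging one vertex of the hexagon breaks the translation rule, so the check correctly rejects the distorted shape.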

Thus, this is a general formula that can give you an infinite variety of interesting tile shapes to cover your bathroom floor with.    At the links in the show notes you can see articles with a crooked 8-sided example, and even a curvy form that looks like a pair of fish.   An article by a professor named Bruce Torrence at Randolph-Macon college also describes a Mathematica program that can be used to design and check arbitrary Conway tiles.   I wouldn’t be surprised if famous artworks involving complex tilings, like the carvings of M. C. Escher or classical Islamic mosaics, ultimately used an intuitive understanding of similar criteria to derive their patterns.   Though, most likely, they didn’t prove their generality with the same level of mathematical rigor as Conway.

We should also note that the Conway criterion is sufficient, but not necessary, to create a valid planar tiling.   In other words, while it’s a great shortcut for coming up with an interesting design for a mosaic, it’s not a method for finding all possible tilings.   There are many planar tilings that do not fit the Conway criterion.    You can find lots of examples online if you search.     So while it is a useful shortcut, this criterion is not a full characterization of all periodic planar tilings.   

Anyway, Conway made many more contributions to the theory of planar tilings, and related abstract areas like group theory.   If you look him up online, you can see numerous articles about his other results in these areas, as well as more colorful details of his unusual personal trajectory through the mathematical world.   As with all the best mathematicians, his ideas will long outlive his physical body.

And this has been your math mutation for today.


Monday, March 9, 2020

259: Is Reality Really Real?

Audio Link

Recently I read an amusing book by a cognitive scientist named Donald Hoffman, titled “The Case Against Reality”.    Hoffman argues that what we think we are perceiving as reality is actually an abstract mathematical model created by our neurons, with very little relationship to reality as it may actually exist.   Note that this is not one of those popular arguments that we’re living inside a computer simulation— Hoffman isn’t claiming that.   This also isn’t an argument about the limits of our perception, such as our inability to fully observe subatomic particles at a scale that would confirm or deny string theory.    He accepts as a starting point that we do actually exist and are perceiving something.    However, that ‘something’ that we seem to be perceiving is radically different from the core concepts of space, time, and general physics that we think we’re observing.

A key metaphor Hoffman points to is the idea of your computer desktop screen, where you see icons representing files and applications.   You can perform various actions on this screen, moving files between folders, starting applications, etc.   Some of them even have real consequences:   you know that if you drag a file into the trash and click the ‘empty trash’ button, the file will cease to exist on your computer.   Yet you are completely isolated from the world of electrical signals, semiconductor physics, and other real aspects of how those computer operations are actually implemented.   The ultra-simple abstract desktop serves your need for practical purposes in most cases.   Hoffman believes that our view of the universe is similar to a user’s view of a computer desktop:  we are seeing a minimal abstract model needed to conduct our lives.

It’s easy to argue, from observing more primitive animals in nature, that evolution does have a tendency to take shortcuts whenever possible.   Hoffman points out a number of funny examples, like a type of beetle that identifies females through their shiny backs, and can be fooled into trying to mate with a small bottle.   If the species could successfully perpetuate itself by using this shininess heuristic to quickly identify females with minimal energy, why would evolution bother trying to teach it to observe the world in more detail?   Hoffman calls this the FBT, or “Fitness Beats Truth” concept.   He even tries to raise it to the status of a theorem, by making various assumptions and then showing that given the choice between providing true perceptions or the minimal required to enable reproductive fitness in a particular area, the sensible minimal-cost evolutionary mechanism would always choose fitness over truth.

So, is this an unassailable proof that we are simply observing some complex computer desktop, rather than actually perceiving something close to reality?   Actually, I see a few issues with Hoffman’s argument.   The first one is that he dismisses far too casually the idea that maybe, once a certain amount of interaction with reality is needed, it’s easier for evolution to build actual reality-perceiving mechanisms than to come up with a new, complex abstraction that applies to the situation.    While a major computer chip manufacturer can do lots of work using abstract design schematics, for example, at some point they may need to debug key manufacturing issues.   When this happens they stop using purely abstract models, and look at their actual chips using powerful electron microscopes.    Isn’t it possible that evolution could reach a similar point, where there is a need for general adaptability, and the best way to implement it is to provide tools for observing reality?  

Another big argument in favor of our perceptions being close to reality is that we are able to derive very complex indirect consequences of what we observe, and show that they apply in practice.   For example, a century ago Einstein came up with the theory of relativity to help explain strange observed facts, such as the speed of light seeming to be the same for moving and nonmoving objects.   He and other scientists derived numerous consequences from the resulting equations, such as surprising time dilation effects.   Eventually these consequences became critical in developing modern technologies such as the Global Positioning System, which we use every day to accurately navigate our cars.   Could evolution have come up with such a self-consistent abstract system, originally in order to enable groups of advanced monkeys to more efficiently traverse the treetops and gather bananas?   I’m a bit skeptical.   I think for evolutionary purposes, our approximate abstract perception that light travels instantaneously has been perfect for most of history, and our advanced observations beyond that are detecting a real level of reality that evolution wasn’t particularly caring about.

Finally, one other critical flaw is that Hoffman claims that even our core concepts such as space and time are part of this abstract model that evolution has created for us.   But evolution itself is a process that we have observed to occur in space and over time.  So if evolution has created some kind of abstract model for us, that in itself proves that space and time exist in some sense, or else the overall argument is self-contradictory due to the nonexistence of evolution.    I suppose you could rescue it by saying just enough of space and time is real to enable evolution, but that seems a little hokey to me.

So, for now, I’ll happily move on with my life in the belief that reality really does exist.    

And this has been your math mutation for today.


Saturday, January 25, 2020

258: Memory Or Knot

Audio Link

The discussion of computer memories and time in the last episode brought to mind another form of digital memory I’ve been planning to talk about at some point.   The short life span of our current computer memories is always a potential concern— as you can see in the chart linked in our show notes, nearly all current forms of data storage, such as hard drives, flash memories, or floppy disks, will likely last less than 30 years.    For things that are important, data providers and backup services are continually copying and transferring our data.   But did you know that there is a simple form of digital storage that requires no ongoing power, and can last more than 500 years?    I am, of course, talking about the Inca system of knotted strings known as the quipu.

The quipu system, which was used by the Incas of Peru before the Spanish conquest in the 1500s, was actually quite sophisticated for its time.    The knotted strings represented a place-based number system, essentially assigning a region of the string to each digit of a number they wanted to store.   Thus, to record  the number 246 in a quipu, you would put 2 knots in the first region, 4 in the second, and 6 in the third.   They were even aware of the concept of 0 in a place-based system as well, a concept that was just emerging into common use in Europe when the Spanish began conquering the Americas.   The quipu properly represented the digit 0 by having no knots in the corresponding region.   

As another clever enhancement, they had a special type of knot, a “long knot” which involved extra turns of the string, which would always be used only to represent a digit in the ones place.    This way they didn’t have to waste a string if recording a number that was only a couple of digits:  they could start a new number in the next part of the string, by switching to long knots to show they were back in the ones place.   So if you had a 6-segment string, and wanted to store the numbers 246 and 123, you could put them both on the string, without any worry about it being confused for 246,123.  You just had to make sure that you tied the sections representing the 6 and the 3 with long knots.
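As a playful illustration, here's a short Python sketch of my own, a deliberately simplified model that ignores real quipu details like the special figure-eight knot for a lone digit 1, showing how the long-knot convention lets multiple numbers share one cord unambiguously:

```python
# Simplified model: a cord is a list of (digit, knot_type) pairs, most
# significant digit first; the ones digit is tied as a "long" knot so the
# next number can begin without ambiguity. A digit of 0 is an empty region.
def encode_quipu(numbers):
    cord = []
    for n in numbers:
        digits = [int(d) for d in str(n)]
        for i, d in enumerate(digits):
            knot = "long" if i == len(digits) - 1 else "single"
            cord.append((d, knot))
    return cord

def decode_quipu(cord):
    numbers, value = [], 0
    for d, knot in cord:
        value = value * 10 + d
        if knot == "long":        # a long knot marks the ones place
            numbers.append(value)
            value = 0
    return numbers

cord = encode_quipu([246, 123])
print(decode_quipu(cord))  # [246, 123], not 246123
```

Note that the zero-digit case works too: 205 encodes as a two-knot region, an empty region, and a five-turn long knot, and decodes back to 205.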

With this system, the Incas had an easy and efficient way to do proper accounting for business, taxes, and similar issues.    Researchers are pretty certain that this interpretation of quipus is correct, since there are some cases where periodic summary quipus are placed that each show the sum of the previous set of quipus— it would be hard for these sums to work out correctly by luck alone.   There is actually a lot more to these quipus though:  there are many intricate quipus that don’t seem to be recording numbers, and likely have other meanings.   Sadly, the Spanish invaders didn’t think of trying to preserve the quipus or their interpretation, so we don’t have any records of detailed guidance from the original creators of the quipus.    About 900 known quipus have survived to the present day.

There have, however, been some intriguing developments in the last few decades, providing progress in interpreting the non-numerical aspects of this system.  Anthropologist Gary Urton of Harvard noticed that in an area where Spanish census takers had recorded 132 local lords paying tribute, there was a known quipu with exactly 132 strands.   This led to an interpretation where clans could be identified based on the way the quipu cords were attached to the main one.    Also, anthropologist Sabine Hyland from St Andrews University discovered a remote Peruvian village where there were locals who could connect narratives passed down through the generations to a particular set of quipus, leading to further breakthroughs in their interpretation.   She found 95 core combinations of color, fiber, and knot direction, which together may be interpretable as a phonetic system, essentially like letters of the alphabet.  The research is still in progress, but it may be that the quipus were effectively the equivalent of a full written language.

So, if you are nervous about the short lifespan of your computer data, go buy a bundle of strings and start knotting them in a well-defined pattern.   Anything you could save on your hard drive could easily be saved on a sufficient number of quipu strings, and might be more likely to be accessible to your great-great-great-grandchildren.  Just remember to tell someone what they mean before Spanish invaders come knocking at your door.

And this has been your math mutation for today.


Sunday, December 29, 2019

257: Turning Things Sideways In Time

Audio Link

I recently came across a reference to a famous quote by supercomputing pioneer Danny Hillis:   “Memory locations are wires turned sideways in time”.   When first hearing this, it sounds a bit mind-bending— how can you turn something sideways in time?    But once you recall the idea of time as a fourth dimension, the quote is pretty clever.    Normally, you think of an ideal wire as transferring a bit, the simplest piece of computer data, from one point of space to another.   And an ideal memory location stores a bit in a particular place now, so you can access it in the future.    Thus, if you could turn things sideways in time, transferring a movement along a line in space to a movement to a future point in time through a rotation in the fourth dimension, you could indeed transform a wire into a memory by simply carrying out such a rotation.    So many things in modern computing would be simpler if we could do this easily.

On the other hand, on closer scrutiny, the idea of memories as sideways wires in time doesn’t really seem that useful.   In practice, we must base computer memories on very different applications of physics than wires— simple dimensional rotations are just not in our current bag of manufacturing tricks.    And if you look more closely at wires, while they do carry bits to another point in space, they also carry them to a future point in time, due to propagation delay.   The fact that wires are not instantaneous, but have real delays, has become more and more significant as computing technology advanced; due to the infinitesimally sized components of current processors, they are a constant consideration for every designer.   So the conceptual rotation in time of a wire, to translate it to a memory, would be something less than 90 degrees, as wires already are turned a bit in the time direction.

Another interesting thing about this concept of rotating in time is that, once you think about it, it’s not really a statement about modern computing technology, just about thinking of converting a transfer in space to one in time as a dimension.   You can apply it in a lot of other cases.   How about this one:  “Podcasts are just radio shows turned sideways in time”.   That kind of applies, right?    But we don’t even need to constrain ourselves to modern high-tech.    Couldn’t we just as easily say “Cassette tapes are concerts turned sideways in time?”    Or “Government laws are king’s proclamations turned sideways in time?”   In some sense, the evolution of modern life fits in here too— can’t we say that “Genes are biological mutations turned sideways in time”?    Even the ancient Greeks could have gotten into the act, by saying “Papyrus scrolls are Plato’s lectures, turned sideways in time.”    Sure, all of them do sound profound to some degree on first hearing, but are really just clever restatements of the idea of time as a dimension.

Another interesting aspect of this quotation is Hillis’s concern with “deep time”, the idea that he wants to somehow be able to communicate information about our current society long distances along the time dimension.    Already we barely know what our predecessors 3000 or so years ago thought about and how they lived.    We may seem to have generated a lot of information in our current civilization, but if there was some major disaster and we spent a few years without power to refresh our computer memories, hardly any of it would survive.   Our mass-produced physical books are much less hardy, for the most part, than the ones created by ancient Greeks and Romans, so even written information would disintegrate very quickly.   And what about 10,000 years from now?    We know basically nothing about any human civilization that may have existed that long ago.

To address this problem, Hillis became involved in the Long Now Foundation, an organization that attempts to look at long-term problems facing humanity in the 10,000 year range.   One of their key projects is to create the Clock of the Long Now, which you may recall me mentioning in an earlier podcast:   it’s a clock that is built to last at least 10,000 years, by being put in a sheltered location and not requiring any external source of power.   They are even trying to take human factors into account, for example by not allowing any expensive materials in the construction, so there will be no incentive for future thieves to destroy and loot it.   And it is only allowed to use materials available during the Bronze Age, so it will still be repairable after a general societal collapse if needed.    Though if society has collapsed to the point where Math Mutation podcasts are gone, will there really be a point to our race continuing to survive?    In any case, Jeff Bezos seems to think so, since he has spent over 40 million dollars funding this clock’s construction.

And this has been your math mutation for today.


Saturday, November 30, 2019

256: Crazy Eyed Fish

Audio Link

In the last episode I talked a bit about Richard Dawkins’s book “Climbing Mount Improbable”, explaining how arguments about the mathematical impossibility of evolution tend to overlook some details of how it actually works.   Today we’re going to talk about another interesting mathematical aspect of evolution that Dawkins discusses in that book, the rise of symmetry.  We all know that most modern animals exhibit symmetry along at least one axis.  Some have argued that this is evidence of intelligent planning, similar to a computer-based CAD design done with attention to appearance, beauty, or other subjective standards.   But actually, if you think about how evolution works, there are several natural selection pressures which would inherently push multicellular creatures towards symmetry as they evolve.

One of the most basic observations is that if you’re a creature that simply floats in the water, is anchored to a point on the ground, or doesn’t move much, you have a natural notion of “up” and “down”, but most other directions are equivalent.   Up is the place where sunlight comes from, while Down is the direction where gravity pulls you.   Aside from this distinction, you don’t want to favor particular other directions, since you don’t know where your food will happen to come from.   So if you evolve some useful new feature, like a starfish’s arms, the most efficient design will equally space them around you, enabling their use in all directions.   A natural form of mutation is to duplicate a body part.   These simple observations result in the radial symmetry we see in creatures like jellyfish and starfish.

In the case of creatures that intentionally move for a while in a consistent direction, new factors come into play.   You still have the natural concepts of up and down, since the sun and gravity are almost always going to be critical factors.   But now whenever you are moving, you also have a definite direction of motion, usually towards your food.   Thus you want to have a mouth at one end, and your waste disposal method at the other, so you leave your waste behind rather than wasting energy uselessly re-eating it.    But the directions perpendicular to your motion, the left and right, are still essentially equivalent, so biasing your body in favor of one side would be disadvantageous, or even lead to pulling you around in circles if it affects your method of movement.    

I think the most interesting examples of evolution are the rare exceptions to the above rules, where a formerly symmetric animal finds benefits in breaking that symmetry.   For example, flatfish like flounders and sole look pretty bizarre to us, as they lie on the bottom of the water with both eyes facing upward, one in the “proper” location and the other oddly misplaced on the fish’s face.   They evolved from typical symmetric, nice-looking fish that swam vertically in the water, and in fact are born in a similar form.   But when some of them developed a behavioral mutation and discovered they could feed efficiently by lying on the bottom, selection pressure gradually created mechanisms that moved the now useless bottom eye around to the top as the fish grows.   The resulting arrangement is bizarre-looking due to its asymmetry, but of course quite functional for the fish.   And remembering our discussion in the last episode, how evolution must proceed from very small, not-too-improbable steps:  a series of mutations that gradually move the existing eye to an awkward but functional position is much more likely than a lucky, almost impossible mutation that would rearrange the eyes to symmetrical positions on top of the fish.      

And of course, we can’t forget the many cases where a useful body feature doesn’t directly affect a creature’s interaction with the outside world, so can evolve asymmetrically with no problem.   How many medical comedies have you seen on TV with a cliche situation where someone starts to cut on the wrong side to remove someone’s appendix?   We actually have many internal organs in asymmetric positions— in fact, to a designer who valued the beauty of symmetry, human (and most animal) internal organ arrangements are a total mess.    Maybe in a few years we’ll understand DNA well enough to redesign ourselves in a nice, truly symmetric, pattern.

And this has been your math mutation for today.