Friday, December 25, 2020

266: A Number is a Number

 Audio Link


You may recall that in our last episode we discussed the results of 19th-century attempts to rigorously define the concept of whole numbers.    These attempts culminated in the Peano axioms, a set of simple properties that defined numbers on the basis of the primitive concepts of zero and succession, plus a few related rules.   While this definition has its merits, Bertrand Russell pointed out that it also has some major flaws:  sets that we don’t think of as whole numbers, such as all numbers above 100, all even numbers, or all inverted powers of two, could also satisfy these axioms.   So, how did Russell propose to define numbers?


Here’s Russell’s definition:   “A number is anything which is the number of some class”.    Great, problem solved, we can end the podcast early today!   Or…  maybe not.   Let’s explore Russell’s concepts a bit, to figure out why this definition isn’t as circular as it seems.


The basic concept here is that we think of a whole number as a description of a class of sets, all sets which contain that number of elements.    Let’s take a look at the number 2.   The set of major US political parties, the set of Mars’s moons, and the set of my daughter’s cats are all described by the number two.   But how do we know this?   You might say we could just count the elements in each one to see they have the same number— but Russell points out that that would be cheating, since the concept of counting can only exist if we already know about whole numbers.    So what do we do?


The concept of 1-1 correspondence between sets comes to the rescue.    While we can’t count the elements of a set before we define whole numbers, we can describe similar sets:  a pair of sets are similar if their elements can be put into direct 1-1 correspondence, without any left out.   So despite lacking the intellectual power to count to 2, I can figure out that the number of my daughter’s kittens and the number of moons of Mars are the same:    I’ll map Mars’s moon Phobos to Harvey, and Mars’s moon Deimos to Freya, and see that there are no moons or cats left over.
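
For anyone who likes to see ideas like this in code, here’s a rough Python sketch of the pairing-off idea.   The set names and the little helper function are just my own illustration; the point is that we never count anything, we only pair elements off until one side runs out.

```python
def similar(set_a, set_b):
    """Pair off one element from each set at a time, without ever counting.
    The sets are 'similar' exactly when nothing is left over on either side."""
    a, b = set(set_a), set(set_b)   # work on copies so the originals are untouched
    while a and b:
        a.pop()                     # remove one element from each set...
        b.pop()                     # ...this is the 1-1 correspondence step
    return not a and not b

mars_moons = {"Phobos", "Deimos"}
kittens = {"Harvey", "Freya"}
parties = {"Democrats", "Republicans"}
print(similar(mars_moons, kittens))   # True
print(similar(kittens, parties))      # True
print(similar(mars_moons, {"Luna"}))  # False: a moon is left over
```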


Thus, we are now able to look at two sets and figure out if they belong in the same class of similar sets.   Russell defines the number of a class of sets as the class of all sets that are similar to it.   Personally, I think this would have been a bit clearer if Russell hadn’t chosen to overload the term ‘number’ here, using it twice with slightly different definitions.  So let’s call the class of similar sets a numerical grouping for clarity.   Then the definition we started with, “A number is anything which is the number of some class”, becomes “A number is anything which is the numerical grouping of some class”, which at least doesn’t sound quite as circular.    The wording gets a little tricky here, and I’m sure some Russell scholars might be offended at my attempt to clarify it, but I think the key concept is this:   A number is defined as a class of sets, all of which can be put into 1-1 correspondence with each other, and which, if we conventionally count them (not allowed in the definition, but used here for clarification), have that number of elements.


Is this more satisfying than the Peano axioms?    Well, if we identify zero with the empty set and the succession operation with adding an element to a set and finding its new number, we can see that those axioms are still satisfied.   Furthermore, this interpretation does seem to rule out the pathological examples Russell mentions:  the numbers greater than 100, even numbers, and inverted powers of two all fail to meet this set-based definition.    And Russell successfully used this definition as the basis for numerous further significant mathematical works.   On the other hand, Russell’s method was not the final word on the matter:   philosophers of mathematics continue to propose and debate alternate definitions of whole numbers to this day.   


Personally, I know a whole number when I see it, and maybe that’s all the definition a non-philosopher needs on a normal day.   But it’s nice to know there are people out there somewhere thinking hard about why this is true.


And this has been your math mutation for today.



References:  

Monday, November 30, 2020

265: Defining Numbers, Sort Of

 Audio Link

One of the amazing things about mathematics in general is the way we can continuously discover and prove new results on the basis of simple definitions and axioms.    But in a way, this concept is similar to those ancient myths that our planet is sitting on the back of a giant turtle.   It sits on another turtle, which sits on another, and it’s turtles all the way down.   Where’s the bottom?    In order to prove anything, we need to start from somewhere, right?    


For millennia after the dawn of mathematics, people generally assumed that the starting point had to be the whole numbers and their basic operations: addition, multiplication, etc.    How much simpler could you get than that?   But in the 19th century, philosophers and mathematicians began serious efforts to improve the overall rigor of their endeavor, by defining simpler notions from which you could derive whole numbers and prove their basic properties.   The average person may not need proofs that whole numbers exist, but mathematicians can be a bit picky sometimes.   One of the most successful of these efforts was by Giuseppe Peano from Italy, who published his set of axioms in 1889.


The Peano axioms can be stated in several equivalent forms, but for the moment we’ll use the version in Bertrand Russell’s nice “Introduction to Mathematical Philosophy”.    As Russell states it, they are based on three primitive notions and five axioms.    The notions are the concepts of zero, number, and successor.   Note that he’s not saying we start by knowing what all the numbers are, just that we are assuming some set exists which we are calling “numbers”.  Based on these, the five axioms are:

  1. Zero is a number.
  2. The successor of any number is a number.
  3. No two numbers have the same successor.
  4. Zero is not the successor of any number.   (Remember that we’re just defining whole numbers here; negative numbers will potentially be a later extension to the system.)
  5. Induction works:   if some property P belongs to 0, and we can prove that if P is true for some number, it’s true for its successor, then P is true for all numbers.   


With these primitive notions, we can derive the existence of all the whole numbers, without having them known at the start.    For example, we know Zero has a successor which is a number, so let’s label that 1.   Then 1 has a successor number as well, so let’s call that 2, and so on.     We can also define the basic operations we’re used to, simply building on these axioms:  for example, let’s define addition.    We’ll create a plus operation, and define “a + 0” as equal to a for any number a.    Then we can define “a + the successor of b” as “the successor of (a+b)”.    So, for example, a + 1 equals a + “the successor of 0”, which becomes “the successor of a + 0”; by our original definition, this boils down to “the successor of a”.   Thus we have shown that the operation a+1 always leads to a’s successor, a good sign that we are using a reasonable definition of addition.     We can similarly define other operations such as multiplication and inequalities.
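
If you like seeing such definitions made concrete, here’s a minimal Python sketch of this successor-based addition.   The representation (zero as an empty tuple, successor as wrapping in another tuple) and the function names are purely my own illustration, not anything from Peano or Russell:

```python
def zero():
    return ()                # represent zero as the empty tuple

def succ(n):
    return (n,)              # the successor just wraps the previous number

def add(a, b):
    """Addition defined only from zero and successor:
    a + 0 = a, and a + succ(b) = succ(a + b)."""
    if b == zero():
        return a
    (prev,) = b              # b is the successor of prev
    return succ(add(a, prev))

one = succ(zero())
two = succ(one)
three = succ(two)
print(add(one, two) == three)      # True: 1 + 2 = 3
print(add(two, one) == succ(two))  # True: a + 1 is always a's successor
```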


The Peano axioms were quite successful and useful, and a great influence on the progression of the foundations of mathematics.    Yet Russell points out that they had a few key flaws.   Think again about the ideas we started with:  zero, successors, and induction.   They certainly apply to the natural numbers… but could they apply to other things, that are not what we would think of as the set of natural numbers?     The answer is yes— there are numerous other sets that can satisfy the axioms.    


- One example:  let’s just define our “zero” for these axioms as the conventional whole number 100.   Then what is being described is the set of whole numbers above 100.   If you think about it for a minute, this won’t violate any of Peano’s axioms—  we are still defining a set of numbers with distinct successors, our new “zero” is not the successor of anything in the set, and induction still applies.

- As another example, let’s keep zero as our conventional zero, but define the “successor” operation as adding 2.   Now our axioms describe the set of even whole numbers.  A very useful set, indeed, but not the true set of whole numbers we were aiming to describe.

- As an even more absurd case, let’s define our “zero” as the conventional number 1, and the successor operation as division by 2.   Then we are describing the infinite progression 1, 1/2, 1/4, 1/8, and so on.    Another very useful series, but not at all matching our intention of describing the whole numbers.
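
To make those three examples concrete, here’s a small Python sketch that generates the first few “numbers” of each alternative model.   The model names and the generator loop are just my own illustration:

```python
# Each "model" supplies its own zero and its own successor function.
models = {
    "whole numbers above our new zero": (100, lambda n: n + 1),
    "even whole numbers":               (0,   lambda n: n + 2),
    "inverted powers of two":           (1,   lambda n: n / 2),
}

for name, (zero, successor) in models.items():
    elements, n = [], zero
    for _ in range(6):           # generate the first few "numbers" of this model
        elements.append(n)
        n = successor(n)
    print(f"{name}: {elements}")
# In each model, repeated successors never repeat an element and never land back
# on that model's "zero", so Peano's axioms are satisfied even though none of
# these is the usual 0, 1, 2, 3, ...
```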


Choosing alternate interpretations of this set, naturally, can lead to very weird interpretations of our derived operations such as addition and multiplication.   But Russell’s point is that despite the power and utility of Peano’s axioms, there is clearly something lacking.    This situation actually reminds me a bit of some challenges we encounter in my day job, proving that computer chip designs work correctly:   it’s nice if you can prove that your axioms lead to desired conclusions for your design, but you also need evidence that there aren’t also BAD designs that would satisfy those axioms equally well.   If such bad designs do exist, your job isn’t quite done.


Russell’s answer to this issue was to seek improved approaches to defining whole numbers based on set theory, which would more precisely correspond to our notion of what these numbers really are.    We’ll discuss this topic in a future podcast.    


And this has been your math mutation for today.



References:  


Sunday, October 11, 2020

264: Unbalanced Society

Audio Link

You may recall that in several past episodes I mentioned the odd pop philosopher Alfred Korzybski and his early 20th-century movement known as General Semantics.   Korzybski believed that the imprecision and misuse of language was responsible for many of society’s ills.   He came up with many supposedly practical ideas to fix this such as using “indexing” and “dating” to add numerical tags to objects you reference, and minimizing the use of the verb “to be” due to its many possible meanings.    A few weeks ago I discovered that one of his books, “Manhood of Humanity”, was downloadable for free at Project Gutenberg, and couldn’t resist taking a look to see if there were any more amusing ideas there.   And I did find one:  a supposed mathematical explanation for the many societal upheavals and conflicts of the past century.


Basically, Korzybski was looking at the pace of change in various fields of knowledge that we have been acquiring over time.   He made much of his observation that we are what he calls “time-binding” creatures:   unlike any other creature on the planet, we can learn things and pass them down to our descendants, so the process of learning and development happens at the level of human society, rather than just of individuals.    According to him, the simple, natural way most knowledge accumulates is through linear or arithmetic progressions:   a series like 2, 4, 6, 8, 10, where you steadily move forward by small jumps.   However, there are certain domains, in areas of science and technology, where in recent centuries every piece of new knowledge has drawn on massive amounts of previous ideas, creating a geometric, or exponential, progression, like 2, 4, 8, 16, 32, etc.    This disconnect, he argued, is the source of many of our problems.
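
Just to see how fast these two kinds of progression pull apart, here’s a tiny Python illustration (the variable names are mine, not Korzybski’s):

```python
arithmetic = [2 * (k + 1) for k in range(8)]    # 2, 4, 6, 8, ... fixed step
geometric  = [2 ** (k + 1) for k in range(8)]   # 2, 4, 8, 16, ... fixed ratio
for a, g in zip(arithmetic, geometric):
    print(a, g)
# By the 8th term the geometric progression (256) has left the arithmetic
# one (16) far behind, which is the "strain" Korzybski worried about.
```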


In other words, early historical growth in social and political areas roughly tracked with growth of technology, but now technology has zoomed ahead of our social development due to this disconnect.   And this cannot be good for humanity as a whole.   As Korzybski stated, “It is plain as the noon-day sun that, if progress in one of the matters advances according to the law of a geometric progression and the other in accordance with a law of an arithmetical progression, progress in the former matter will very quickly and ever more rapidly outstrip progress in the latter, so that, if the two interests would be interdependent (as they always are), a strain is gradually produced in human affairs, social equilibrium is at length destroyed; there follows a period of readjustment by violence and force.”   He then goes on to state that this is a key cause of insurrections, revolutions, and wars.   


This idea seems like it might actually have something to it, to some degree.    But we do have to be careful here— while everyone is always focused on their own time, and our news media love to sensationalize wars and violence, there has been violence and war throughout the history of society.   Many argue that the real oddity of modern times is the proportion of humanity who can live out their lives secure from violence.     Korzybski did write this just after World War I, though, so we can understand why it might have looked to him like society was falling apart.   


Where he really went off the rails though is when he tried to prescribe a solution for this disconnect in societal vs technological growth:   everyone must use his system of more precise and scientifically defined language, correctly defining ideas like “good”, “bad”, and “truth”, and then we will enable exponential growth in all fields of knowledge.   In his words:  “If only these three words could be scientifically defined, philosophy, law, ethics, and psychology would cease to be private theories or verbalism and they would advance to the rank and dignity of sciences.”     He even claimed that such correct definitions would have led to the kind of scientific societal reasoning that could have predicted and prevented World War I.    


But when he tried to actually apply his reasoning to a practical matter, we see some slightly more concerning comments.    He tried to use the algebraic need for a common base when combining like terms to derive the need for a government to unite the people.   Just as you cannot combine algebraic terms like x^a + y^b without finding a common base, you must find the people a “common base” to unite them.   He wrote, “Germany united the powers of living men and women and children; it gave them a common base; it gave them one common social mood and aim; they all became consolidated in service of that which is called the State… they worked, lived, and died for the State.”    He seemed to like this idea, only complaining that the German leadership then chose the wrong aims for their “united terms”.   


This concept of making the social sciences more precise and mathematical seems to appear continuously in 20th-century writing, from many authors.   It always ultimately fails:  aside from the inherent imprecision of the concepts involved,  there is an obvious need for value judgements that cannot be sensibly derived from any mathematics.    In the end, Korzybski provided a few intriguing ideas, buried within loads and loads of sophistry and nonsense.   I always find it amusing to read this kind of stuff, as long as we all remember not to take it too seriously.    If we want to solve modern society’s problems, we can’t just lie back & let the math provide a magic formula.


And this has been your math mutation for today.



References:  






Friday, September 4, 2020

263: Asimov Vs Doyle

 Audio Link


Reading Isaac Asimov’s essay collection “The Roving Mind”, I was surprised to see some rather harsh comments aimed at Sherlock Holmes author Arthur Conan Doyle.   Asimov didn’t have any issues with the quality of the fiction, but was pointing out some scientific errors in Doyle’s writing.   Some of his most severe comments were aimed at the fact that the evil Professor Moriarty was presented as a mathematical genius, and said to have written “a treatise on the binomial theorem”.   Here are Asimov’s comments on that credential:


Moriarty was 21 years old in 1865 (it is estimated), but forty years earlier than that the Norwegian mathematician Niels Henrik Abel had fully worked out the last detail of the mathematical subject known as “the binomial theorem,” leaving Moriarty nothing to do on the matter. It was completely solved and has not advanced beyond Abel to this day.

Asimov, Isaac. The Roving Mind (p. 141). Prometheus Books - A. Kindle Edition. 


Is this really true?  Did Asimov land a slam dunk against poor Doyle, forever disproving his mathematical competence, and casting doubt on the talents of evil genius Moriarty?   Well, let’s take a closer look.


First, let’s refresh our memories on the Binomial Theorem.   Most simply viewed, this is a theorem that talks about the coefficients of the terms when you expand the expression (x + y) taken to the nth power.    For example, x + y to the 1st power has the coefficients 1 1, since there is 1 x and 1 y.   If you square x + y, you get x^2 + 2xy + y^2, so the coefficients are 1 2 1.  Continuing further, if you cube it the coefficients are 1 3 3 1.    If you write out these coefficients in rows, each one staggered so the middle terms appear between two numbers above, you get the famous construct known as Pascal’s Triangle.  Aside from the diagonal sets of 1s going down the left and right edges, each number in the triangle is the sum of the two numbers above.   The 2 in the second row is the sum of the 2 1s above it, the 3s in the 3rd row are each the sum of a 1 and 2 above, etc.   The binomial theorem basically states that Pascal’s Triangle correctly represents the coefficients for any whole-number power of (x+y).
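
If you’d like to check this for yourself, here’s a short Python sketch that builds the first few rows of Pascal’s Triangle by summing adjacent pairs, and compares them against the standard binomial coefficients.   The helper names are just made up for this example:

```python
from math import comb

def pascal_row(n):
    """Coefficients of (x + y)**n, i.e. row n of Pascal's Triangle."""
    return [comb(n, k) for k in range(n + 1)]

def next_row(row):
    """Build the next row by summing adjacent pairs, with 1s on the edges."""
    return [1] + [a + b for a, b in zip(row, row[1:])] + [1]

row = [1]
for n in range(5):
    assert row == pascal_row(n)   # the "sum the two numbers above" rule matches
    print(row)
    row = next_row(row)
# [1], [1, 1], [1, 2, 1], [1, 3, 3, 1], [1, 4, 6, 4, 1]
```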


I have, as usual, oversimplified a bit here.   If you write out a few examples, you’ll quickly see that in the form I just stated, the theorem seems to just fall out naturally from the way algebra works.   It’s pretty easy to prove by simple induction.    And in fact, limited cases have been understood since the 4th century A.D.   But once you allow non-integer or irrational exponents, the theorem becomes a lot more complex.   As Asimov stated, Niels Abel is credited with proving a generalized version of the theorem in the 1820s, as part of an amazing streak of mathematical contributions before his untimely death at the age of 27.   
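
Just to give a flavor of what the non-integer case looks like, here’s a quick numerical sketch in Python of the generalized binomial series for a fractional power.   This is only a rough illustration with made-up function names, not a reproduction of Abel’s actual convergence work:

```python
def general_binomial(alpha, k):
    """Generalized binomial coefficient 'alpha choose k' for any real alpha."""
    c = 1.0
    for i in range(k):
        c *= (alpha - i) / (i + 1)
    return c

x, alpha = 0.2, 0.5
series = sum(general_binomial(alpha, k) * x ** k for k in range(20))
print(series, (1 + x) ** alpha)   # both are about 1.0954, so the series agrees
```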


But does this mean that Asimov is correct, and Moriarty could not have demonstrated his mathematical talents with a treatise on this theorem?   The most ironic aspect of Asimov’s comments, I think, is the lack of self-reflection.   After all, what are Asimov’s essay collections?   Essentially he is writing about well-established areas of science and mathematics, to improve their understanding by the general public.   Nearly every one of his essays covers facts that were discovered and/or proven many years before he wrote.   Indeed, if I comment indirectly about this exact work by writing “Asimov was a great essayist, and I enjoyed his commentary in ‘The Roving Mind’ about the binomial theorem”, would that invalidate my own credentials, as I can’t possibly praise someone’s expository writing about a well-established theorem?


Here’s one way to brush aside that quibble:  perhaps Asimov was implicitly considering popular essays about science and math, like his books or like the Math Mutation podcast, to be intellectually trivial exercises, not indicating any particular intelligence by the author.   I’ll refrain from further comment in that domain from a motive of self-preservation.   But even assuming we accept that judgement for now, I think Asimov made a deeper error here.   


In mathematics, you are never really “done”.   There are nearly always ways to improve or generalize a theorem.   Can we apply it to vectors, matrices, or fields?   Can we apply it to general types of functions other than the algebraic expressions Abel originally considered?   Can we find analogs of the theorem that apply to powers of more complex expressions, or in higher-dimensional spaces?   While Abel may have definitively proven the theorem as originally stated, that was far from the end of possible work in related areas.    A little web searching, in fact, uncovers a 2011 paper by modern-era mathematician David Goss called “The Ongoing Binomial Revolution”, which discusses several major 20th century mathematical results that descend from the Binomial Theorem.   I’d like to provide more details, but I’m afraid most of the paper is a bit over my head.   However, it ends with a telling comment:   “Future research should lead to a deeper understanding of these recent offshoots of the Binomial Theorem as well as add many, as yet undiscovered, new ones.”


It looks like Asimov himself, while a very intelligent man and a great writer, held a somewhat naive view of mathematics.    It’s a common mistake, often made by people in applied fields who have always consumed math as well-established, fully presented theorems and formulas.   We need to recognize that a huge part of the genius of mathematics is the idea of abstracting and generalizing previous observations, and this work never really stops.   When a theorem is proven, that usually leads to even more opportunities for offshoots, generalizations, and other further expansion of knowledge.    Thus, I think we have to conclude that Isaac Asimov was wrong, and it was perfectly reasonable for an academic to write meaningfully about the Binomial Theorem many years after Abel, like Goss in his 2011 paper.     Holmes was right to fear Moriarty, as there is nothing more dangerous than an archvillain who understands mathematics.  


And this has been your math mutation for today.



References:  

Friday, July 31, 2020

262: My Bathroom Needs More Beryllium

Audio Link

Recently, during one of my less exciting meetings, I started idly doodling on a notepad.   For some reason, I drew a small 3x3 square, divided it into 9 subsquares, and started filling in numbers so all the sums in each row, column, and diagonal would match, no doubt drawing on vague memories from doing such puzzles in my childhood.   This is the classic “magic square” puzzle, where you try to fill in an NxN square with numbers from 1 to N^2 and produce such matching sums.   

The 3x3 magic square is relatively trivial to solve, but is interesting in that it’s the only size square (other than the unsolvable 2x2 case and the silly 1x1 case) where the “magic” solution is unique.    If you create a 3x3 square with 4 9 2 in row 1, 3 5 7 in row 2, and 8 1 6 in row 3, that is literally the only possible solution:  any other solution you come up with will be some combination of reflections and rotations of that one.    But surprisingly, the number of solutions climbs very rapidly as the square size grows:  there are a whopping 880 4x4 squares, over 275 million 5x5s, and over 10^19 6x6s.   Apparently the full set of magic squares of arbitrary size has not been characterized in detail by mathematicians, though there are procedures known for generating arbitrarily large examples.   
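
If you’d like to convince yourself that the square I just described really is magic, here’s a quick Python check.   The function and variable names are just my own for illustration:

```python
def is_magic(square):
    """Check that every row, column, and both diagonals share the same sum."""
    n = len(square)
    target = sum(square[0])
    rows_ok = all(sum(row) == target for row in square)
    cols_ok = all(sum(square[i][j] for i in range(n)) == target for j in range(n))
    diag_ok = sum(square[i][i] for i in range(n)) == target
    anti_ok = sum(square[i][n - 1 - i] for i in range(n)) == target
    return rows_ok and cols_ok and diag_ok and anti_ok

lo_shu = [[4, 9, 2],
          [3, 5, 7],
          [8, 1, 6]]
print(is_magic(lo_shu))   # True: every line sums to 15
```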

There are a number of ways you can transform a magic square while preserving its “magic” property of matching row, column, and diagonal sums.   You can probably figure out some of these off the top of your head.   The most obvious one is to add, subtract, or multiply all numbers by a constant, though that does result in violating the rule of using the numbers 1 through n^2.   A less obvious one is to choose two cells at opposite points on a diagonal, and interchange their rows and columns— after playing with a few examples, it’s not hard to see why this works.  There are various other related row/column interchange methods; the Wikipedia page describes numerous variations, as well as a generalization. 
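
Here’s a tiny Python illustration of the most obvious transformation, adding a constant to every cell: all the sums change, but they still match each other, even though we’ve given up the rule of using 1 through n^2.   (The helper is just my own sketch.)

```python
lo_shu = [[4, 9, 2], [3, 5, 7], [8, 1, 6]]

def line_sums(sq):
    """Return all row, column, and diagonal sums of a square."""
    n = len(sq)
    return ([sum(row) for row in sq] +
            [sum(sq[i][j] for i in range(n)) for j in range(n)] +
            [sum(sq[i][i] for i in range(n)),
             sum(sq[i][n - 1 - i] for i in range(n))])

shifted = [[cell + 10 for cell in row] for row in lo_shu]   # add 10 to every cell
print(set(line_sums(lo_shu)), set(line_sums(shifted)))      # {15} {45}: both still magic
```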

One of the most amusing things I discovered about these squares is that the “magic” part isn’t just a cute name:   for thousands of years, people have attributed mystical properties to these squares, especially the unique 3x3 one.   For example, in ancient China, this was known as the “Lo Shu” square, said to have first been discovered on the back of a magical turtle that emerged from the Luo river during a large flood.   The 8 outer cells of the square are associated with the 8 trigrams used in the I Ching, and Feng Shui practitioners associate each of the squares with one of the 5 classical Chinese elements:  Earth, Wood, Water, Metal, and Fire.   Feng Shui, as you might recall, is the ancient Chinese art of properly organizing and arranging your house so the elements are in harmony.   Since there are more squares than elements, a few are repeated:  4, 3, and 8 are connected to Wood, 2 and 5 to Earth, 7 and 6 to Metal, 9 to Fire, and 1 to Water.

Here’s how you use the magic square in Feng Shui:   lay it out over a floor plan of your house, such that the number 1 corresponds to your front door.   It’s a bit confusing how they decide to scale the square, but I guess you’re supposed to fit it as closely as you can over your house’s outline.    Unless your house is perfectly square, parts of some of the subsquares will be outside your walls, or in little-used areas like storage closets.  Then for each square, decide if it’s well “energized” in your home— if not, you may need to compensate by putting more of that element in your house.   As one Feng Shui site states, if square 6, which corresponds to the Metal element, needs enhancing, “Wearing gold is recommended, as well as hanging a gold-toned metal windchime.”   I’m not quite sure what makes gold a better representative of “metal” than, for example, tin, but such ignorance probably explains why my karma is so poor these days.

Actually, what always amuses me about these New Age uses of “elements” is that they ignore the last 200 years of chemistry, where we have discovered the true elements, as you know them from the periodic table.   But this is fixable:  in fact, scientists currently know of 118 elements, which falls very close to the square number of 121, equal to 11x11.   So for a modern, accurate Feng Shui, we should use an 11x11 magic square for this divination.   The fact that enormous numbers of such squares exist might add some confusion, but since we apparently believe in stuff like the I Ching anyway if we’re using this method, just cast some yarrow sticks to let the universe determine which construction method it wants you to use to build your square.   Then you can make each square correspond to the atomic number of an actual element in the Periodic Table, starting over at 119 for a handful of duplicates.  (Actually a much better ratio of duplicates than the 3x3 method in any case.)
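
If you actually want an 11x11 magic square to lay over your floor plan, one classical construction for odd-sized squares is the De la Loubère, or “Siamese”, method: start in the middle of the top row, keep moving up and to the right (wrapping around the edges), and drop down one cell whenever the target is already occupied.   Here’s a rough Python sketch; I make no claim that this is the construction the yarrow sticks will choose for you:

```python
def siamese_magic_square(n):
    """Build an n x n magic square for odd n with the De la Loubere (Siamese) method."""
    if n % 2 == 0:
        raise ValueError("this method only works for odd n")
    square = [[0] * n for _ in range(n)]
    row, col = 0, n // 2                      # start in the middle of the top row
    for k in range(1, n * n + 1):
        square[row][col] = k
        new_row, new_col = (row - 1) % n, (col + 1) % n   # move up and right, wrapping
        if square[new_row][new_col]:                      # occupied: drop down one instead
            new_row, new_col = (row + 1) % n, col
        row, col = new_row, new_col
    return square

sq = siamese_magic_square(11)
magic_constant = 11 * (11 * 11 + 1) // 2      # 671 for an 11x11 square
print(all(sum(row) == magic_constant for row in sq))   # True
```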

Now you can lay this 11x11 square over your house’s floor plan, and identify which elements to adjust in your house to truly enable it to vibrate in sync with all the energies of the world.   Perhaps your kitchen needs more of element 11, Sodium— just add more salt to your food.   Or maybe your child’s room is overlapping the square of element 82, Lead— better check it for old paint.    If square 86 falls in your house, better call a Radon inspector ASAP.    I’m not sure quite what to do if you detect a deficiency of square 117, Tennessine, though— given that the longest-lived samples have lasted a few hundred milliseconds, you may need to get some extra help from the spirits on that topic, or build a nuclear reactor.  But you can’t argue with the math.

And this has been your math mutation for today.


References:  






Tuesday, June 30, 2020

261: The Mystery of the S

Audio Link

Recently, my wife jokingly suggested a rather straightforward topic to me, and I was surprised to realize I had never discussed it in this podcast.   She asked me if I had ever explained why we Americans call the topic of this podcast “math”, while across the pond in the UK, everyone calls it “maths”, with an S at the end?    Given this podcast’s title, it seems like something we should discuss at some point.   So here we go.

Considering it initially, I think we need to figure out whether the word is singular or plural.    It seems pretty singular to me:   as you may have guessed from our variety of topics in previous episodes, I consider “math” to be a collective noun referring to a broad and general field of study, which includes algebra, geometry, topology, calculus, etc.    It’s an abbreviation for the word “mathematics”, which has the same meaning, and whose final S is just a coincidence rather than marking a plural.   However, if you view each of these areas as an individual “math”, you could argue that we then need the plural “maths” to cover them all, with “maths” being its own plural word rather than an abbreviation of “mathematics”.     Or, on the other hand, maybe “maths” is still singular, but a better abbreviation of “mathematics” since it contains the same final letter.    Personally, my deciding factor is that I also find the combination of a “th” and “s” sound in succession somewhat awkward to pronounce.

Naturally, this discussion doesn’t seem to be leading to a clear answer.    So, to dig further, I turned to the ultimate arbiter of truth, the Internet.     And by looking at a few articles there, I’m now more confused than before.    Apparently both “math” and “maths” arose as independent words sometime in the 20th century, descending from previous written abbreviations that were not actually used in spoken language.    According to some online articles, there was a written “maths.” spotted in a letter from 1818, while an early “math.” appeared in 1847.   But since both of them had dots after, indicating they were consciously thought of as abbreviations rather than words, such early uses aren’t very definitive.   

Another thing to think about is the relationship with other words that are similar in nature.   Economics is a collective noun for another broad field of study, similar to mathematics— so why do we always simplify it to “econ”, with no S, rather than “econs” with an S?   That isn’t quite an absolute proof though, since I don’t think ‘econ’ is fully considered a word in the same way ‘math’ is; it’s still more of an abbreviation.     My word processor even labels it as a misspelling.  The Guardian also makes the amusing point that the US-UK difference is exactly the opposite for the word “sport” or “sports”, with Americans referring to “sports”, while the Brits use “sport”.    Does this say something about the intellectual vs athletic tendencies on either side of the pond?

The online articles linked in the show notes point out some other interesting tidbits.   There’s also an 1854 reference to “math’s”, with an apostrophe-s, making it sound more like a possessive ending than a pluralization, an intriguing variant on our reasoning.   Or it may be that it was just a writer who was a bit confused about grammar in general.    There’s also an Old English word “math” that refers to the cutting of crops.  Could the need of farmers to measure out fields and calculate proper planting rates have somehow contributed to the modern word?    A final note is that the word “maths” didn’t really become the definitive form in the UK until around 1970.   Could this have been a case of snobby Europeans wanting to distinguish themselves from those crass Americans in the era of Nixon?

Anyway, it looks like there is no ultimate definitive answer.    You’ll just have to see which one flows better on your tongue, and proceed from there.

And this has been your math mutation for today.


References:  




Sunday, May 17, 2020

260: The Conway Criterion

Audio Link

I was sad to hear of the recent passing of Princeton professor John Horton Conway.   (He’s another victim of the you-know-what that I refuse to mention in this podcast due to its oversaturation of the media.)    Professor Conway was a brilliant mathematician known for his interest in mathematical games and amusements.   My time as an undergraduate at Princeton overlapped with Conway’s professorship there, though sadly, I was too shy at the time to actually discuss math stuff with him.     Among his contributions were the “Game of Life” (that’s the mathematical game played on a two-dimensional grid, not the children’s boardgame!) and the concept of “surreal numbers”, both of which we have discussed in past Math Mutation episodes.   Anyway, if you’re the type of person who listens to this podcast, you’ve probably already read a few dozen Conway obituaries, so rather than repeating the amusing biographical information you’ve read elsewhere, I figured the best way to honor him is to discuss another of his mathematical contributions.   So today we’ll be talking about the “Conway Criterion” for periodic planar tilings.

You probably recall the notion of a planar tiling:  basically we want to cover a plane with repeated instances of some shape.   A brick wall is a simple example, though rectangular bricks can be a little boring, due to the ease of regularly fitting them in neat rows.   Conway himself managed to derive some interesting discussion from simple bricks though— in the show notes at mathmutation.com, you can find a link to a video of his Princeton walking tour titled “How to Stare at a Brick Wall”.   But I think hexagons are a slightly more visually pleasing pattern, as you’ve likely seen on a bathroom floor somewhere, or in a beehive’s honeycomb.   Looking at such a hexagon pattern, you might ask whether there’s a way to generalize it:  can you somehow use simple hexagons as a starting point to identify more complex shapes that will also fill a plane?   Conway came up with the answer Yes, and identified a simple set of rules to do this.

To start with, you can imagine squishing or stretching the hexagons— for example, if you squash a honeycomb, you’ll find that the squished hexagons, with not all angles equal, still fill the plane.   With Conway’s rules, you just need to start out with a closed topological disk.   This means essentially taking a hexagon and stretching or bending the sides in any way you want, as long as you don’t tear the shape or push sides together so they intersect.    You can introduce new corners or even curves if you want.    You then identify six points along the perimeter.   In the case of a hexagon, you can just use the six corners.   But the six points you choose, let’s call them A/B/C/D/E/F, don’t have to be corners.   They just have to obey the following rules:   

  1. Boundary segments AB and DE are congruent by translation, meaning they are the same shape and can be moved on top of each other without rotating or reflecting them.
  2. The other four segments BC, CD, EF, and FA are each centrally symmetric:  they are unchanged if rotated 180 degrees around their centers.
  3. At least three of the six points are distinct.  

It’s pretty easy to see that regular hexagons are a direct example of meeting these rules, since any two opposite sides are congruent by translation, and line segments are always centrally symmetric.     But Conway generalized and abstracted the idea of a hexagon-based tiling:  each of the six segments can potentially have curves and zigzags, as long as they ultimately meet the criteria.   The points you use can be anywhere along the outer edge, as long as they divide it in a way that meets the criterion.   If you sketch a few examples you’ll probably see pretty quickly why these rules make sense.
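
For the programmers in the audience, here’s a rough Python sketch of checking these conditions when the boundary is made of straight line segments.   Everything here, from the function names to the representation of the boundary as six polylines, is just my own illustration, not anything of Conway’s.   Note that since the boundary part DE is traversed in the opposite direction from AB as you walk around the tile, the translation check compares AB against DE reversed:

```python
def _translated_match(seg1, seg2, tol=1e-9):
    """True if polyline seg2 is seg1 shifted by a single translation vector."""
    if len(seg1) != len(seg2):
        return False
    dx, dy = seg2[0][0] - seg1[0][0], seg2[0][1] - seg1[0][1]
    return all(abs(x + dx - u) < tol and abs(y + dy - v) < tol
               for (x, y), (u, v) in zip(seg1, seg2))

def _centrally_symmetric(seg, tol=1e-9):
    """True if a polyline maps onto itself under a 180-degree turn about its midpoint."""
    cx = (seg[0][0] + seg[-1][0]) / 2
    cy = (seg[0][1] + seg[-1][1]) / 2
    rotated = [(2 * cx - x, 2 * cy - y) for (x, y) in reversed(seg)]
    return all(abs(x - u) < tol and abs(y - v) < tol
               for (x, y), (u, v) in zip(seg, rotated))

def satisfies_conway_criterion(segments):
    """segments: the boundary split into six polylines [AB, BC, CD, DE, EF, FA],
    each a list of (x, y) points in the order the boundary is traversed."""
    ab, bc, cd, de, ef, fa = segments
    translation_ok = _translated_match(ab, list(reversed(de)))
    symmetry_ok = all(_centrally_symmetric(s) for s in (bc, cd, ef, fa))
    distinct_ok = len({seg[0] for seg in segments}) >= 3
    return translation_ok and symmetry_ok and distinct_ok

# A (slightly irregular) hexagon, one straight side per segment.
hexagon = [[(0, 0), (1, 0)], [(1, 0), (1.5, 1)], [(1.5, 1), (1, 2)],
           [(1, 2), (0, 2)], [(0, 2), (-0.5, 1)], [(-0.5, 1), (0, 0)]]
print(satisfies_conway_criterion(hexagon))   # True
```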

Thus, this is a general formula that can give you an infinite variety of interesting tile shapes to cover your bathroom floor with.    At the links in the show notes you can see articles with a crooked 8-sided example, and even a curvy form that looks like a pair of fish.   An article by a professor named Bruce Torrence at Randolph-Macon College also describes a Mathematica program that can be used to design and check arbitrary Conway tiles.   I wouldn’t be surprised if famous artworks involving complex tilings, like the carvings of M. C. Escher or classical Islamic mosaics, ultimately used an intuitive understanding of similar criteria to derive their patterns.   Though, most likely, they didn’t prove their generality with the same level of mathematical rigor as Conway.

We should also note that the Conway criterion is sufficient, but not necessary, to create a valid planar tiling.   In other words, while it’s a great shortcut for coming up with an interesting design for a mosaic, it’s not a method for finding all possible tilings.   There are many planar tilings that do not fit the Conway criterion.    You can find lots of examples online if you search.     So while it is a useful shortcut, this criterion is not a full characterization of all periodic planar tilings.   

Anyway, Conway made many more contributions to the theory of planar tilings, and related abstract areas like group theory.   If you look him up online, you can see numerous articles about his other results in these areas, as well as more colorful details of his unusual personal trajectory through the mathematical world.   As with all the best mathematicians, his ideas will long outlive his physical body.

And this has been your math mutation for today.


References: