Wednesday, December 27, 2023

288: One Stone To Rule Them All


Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would not have heard in school.   Recording from our headquarters in the suburbs of Wichita, Kansas, this is Erik Seligman, your host.  And now, on to the math.


With the end of 2023 approaching, I spent some time browsing the internet for interesting math news from this year, and realized there was one big unexpected breakthrough by an amateur that really deserves discussion:  the aperiodic monotile.    While that term sounds exotic, it actually covers a very simple concept.   We’ve talked about planar tilings before:  basically, this just means a shape of tile that can cover an infinite version of your bathroom floor.   Your current bathroom probably has a bunch of square or hexagonal tiles, which completely fill the available space.   Well, maybe there are a few practical issues at the edges, but if you imagine the floor to be infinite, those shapes could fill it forever.   If you want to fill your floor with a set of more interesting shapes, you could imagine drawing arbitrary squiggly lines in a hexagon to divide it into several smaller tiles of arbitrary irregularity, and thus these smaller tiles would cover your floor as well.   But all these coverings are periodic:  they consist of an infinite repetition of some small pattern at known intervals.   In contrast, an aperiodic tiling would still cover your floor with copies of a few basic shapes, but the pattern would not show this kind of regular repetition.


When the idea first came up in the early 1960s, logician Hao Wang hypothesized that no aperiodic tiling existed:  he conjectured that any set of tiles that could cover the plane could also cover it periodically.   But he was proven wrong within a few years by his student Robert Berger, who discovered a set of tiles that would indeed cover the plane only aperiodically.   His initial set was very complicated, consisting of over 20,000 tiles.   But this led to a surge of interest in the topic, and by 1974 Roger Penrose had published a simple example of a pair of 4-sided tiles, known as the “kite” and “dart”, that could cover the plane aperiodically with just their two shapes.   These were shown to be examples of an infinite variety of 2-tile sets that enabled such tilings.   And that is essentially where the problem stood, for almost 50 years, with nobody knowing whether a single tile could force this kind of aperiodicity on its own.   Bizarrely, during this long wait, chemists discovered that this mathematical game actually had an application in the real world, where these aperiodic tilings formed the basis of “quasicrystals”, real crystalline structures.   Quasicrystals have been used for applications like nonstick cookware and reinforcing steel, and led to the 2011 Nobel Prize in Chemistry, awarded to Israeli scientist Dan Shechtman.  


Throughout this half century, many mathematicians wondered if there might be a single tile, rather than a set, that could cover the plane aperiodically.   The problem was nicknamed the “Einstein” problem, not because of any relation to the famous physicist, but as a pun, with the words “Ein Stein” being German for “One Stone”.    In 2010 Socolar and Taylor got close, publishing a single tile that could solve this problem— but their tile was unconnected, consisting of a central bumpy hexagon and six floating double-square shapes that orbited it in known positions.   A nice breakthrough, but arguably not a single tile, depending on how you define the concept.   Other researchers found a different solution that was contiguous, but only worked if some small overlaps were allowed; also an interesting result, but not really a satisfying solution to the Einstein problem. 


The real solution finally came earlier this year, published by Smith, Myers, Kaplan, and Goodman-Strauss:  a single tile that covers the plane, but can only do so without ever settling into a regularly repeating pattern.   It was formed by stitching together eight kite shapes into something that looked like an odd-shaped hat:  not trivial, but surprisingly simple for something that had evaded discovery for almost fifty years.   They also proved that this was just a representative of an infinite family of shapes with this property.    You can see it illustrated in some of the articles linked in the show notes at mathmutation.com.


The most surprising thing about this discovery was that it came from an amateur, not from the efforts of a professional mathematician.   David Smith is a retired print technician who enjoys jigsaw puzzles and playing with shapes.   He was using a commonly available software package called the PolyForm puzzle solver, seeing what kind of interesting patterns he could make with different shaped tiles.   When he saw that the patterns with his experimental “hat” tile looked especially unusual, he decided to cut a bunch of copies of the tile out of cardboard and start experimenting by hand.    Convincing himself that he had found something significant, he contacted a professor he knew, Craig Kaplan of the University of Waterloo.  Together, they investigated further, and after Kaplan recruited two more colleagues to assist with the proof, they confirmed that Smith really had, after all these years, found an Einstein tile.  


As a result, if you’ve been thinking of remodeling your bathroom, there is now a nice new floor decoration option you should seriously consider. 


And this has been your math mutation for today.



References:  

https://cs.uwaterloo.ca/~csk/hat/

https://aperiodical.com/2023/03/an-aperiodic-monotile-exists/

https://hedraweb.wordpress.com/2023/03/23/its-a-shape-jim-but-not-as-we-know-it/

https://en.wikipedia.org/wiki/Aperiodic_tiling

https://en.wikipedia.org/wiki/Penrose_tiling

https://en.wikipedia.org/wiki/Quasicrystal

https://www.quantamagazine.org/hobbyist-finds-maths-elusive-einstein-tile-20230404/

https://maxwelldemon.com/2010/04/01/socolar_taylor_aperiodic_tile/



Monday, October 9, 2023

287: The Grim State of Modern Pizza


Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would not have heard in school.   Recording from our headquarters in the suburbs of Wichita, Kansas, this is Erik Seligman, your host.  And now, on to the math.


You’ve probably read or heard at some point about the “replication crisis”, and the related epidemic of scientific fraud, discovered over the past few decades.  Researchers at many institutions and universities have been accused of modifying or making up data to support their desired results, after detailed analysis determined that the reported numbers were inconsistent in subtle ways.   Or maybe you haven’t heard about this— the major media have been sadly deficient in paying attention to these stories.   Most amusingly, earlier this year, such allegations were made against famous Harvard professor Francesca Gino, who has written endless papers on honesty and ethics.   


Usually these allegations come from examining past papers and their associated datasets, and performing various statistical tests to figure out whether the reported numbers have a very low probability of arising from genuine data.   For example, in an earlier podcast we discussed Benford’s Law, a subtle property of leading digits in large datasets, which has been known for many years now.   But all these complex tests have overlooked a basic property of data that, in hindsight, seems so simple a grade-school child could have invented it.   Finally published in 2016 by Brown and Heathers, this is known as the “granularity-related inconsistency of means” test, or GRIM test for short.


Here’s the basic idea behind the GRIM test.   If you are averaging a bunch of numbers, there are only certain values possible in the final digits, based on the value you are dividing into the sum of the numbers.    For example, suppose I tell you I asked 10 people to rate Math Mutation on a scale of 1-100, with only whole number values allowed, no decimals.   Then I report the average result as “95.337”, indicating the incredible public appreciation of my podcast.   Sounds great, doesn’t it?   


But if you think about it, something is fishy here.   I supposedly got some integer total from those 10 people, and divided it by 10, and got 95.337.   Exactly what is the integer you can divide by 10 to get 95.337?    Of course there is none— there should be at most one digit past the decimal when you divide by 10!   For other numbers, there are wider selections of decimals possible; but in general, if you know you got a bunch of whole numbers and divided by a specific whole number, you can determine the possible averages.   That’s the GRIM test— checking if the digits in an average (or similar calculation) are consistent with the claimed data.   What’s really cool about this test is that, unlike the many statistical tests that check for low probabilities of given results, the GRIM test is absolute:  if a paper fails it, there’s a 100% chance that its reported numbers are inconsistent.
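If you want to play with this yourself, here is a minimal Python sketch of a GRIM check, assuming conventional rounding of the reported mean (different rounding conventions exist in practice, so treat this as an illustration rather than a definitive checker):

```python
import math

def grim_consistent(reported_mean, n, decimals):
    """Check whether a mean reported to 'decimals' places could have come
    from n whole-number responses.  Some integer total T must exist with
    T / n rounding to the reported value; since the valid totals form a
    range centered on reported_mean * n, it is enough to test the two
    integers nearest to that product."""
    target = round(reported_mean, decimals)
    for total in (math.floor(reported_mean * n), math.ceil(reported_mean * n)):
        if round(total / n, decimals) == target:
            return True
    return False

print(grim_consistent(95.337, 10, 3))   # False: no integer total works
print(grim_consistent(95.3, 10, 1))     # True: a total of 953 works
```

Run on the podcast-rating example above, it correctly flags 95.337 as impossible for 10 whole-number responses.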


Now you would think this issue is so obvious that nobody would be dumb enough to publish results that fail the GRIM test.   But there you would be wrong:  when they first published about this test, Brown and Heathers applied it to 71 recent papers from major psychology journals, and 36 showed at least one GRIM failure.   The GRIM test also played a major role in exposing problems with the famous “pizza studies” at Cornell’s Food and Brand Lab, which claimed to discover surprising relationships between variables such as the price of pizza, size of slices, male or female accompaniment, and the amount eaten.   Sounds like a silly topic, but this research had real-world effects, leading to lab director Brian Wansink’s appointment to major USDA committees and helping to shape US military “healthy eating” programs.    Wansink ended up retracting 15 papers, though he insisted that all the issues were honest mistakes or sloppiness rather than fraud.   Tragically, humanity then had to revert to our primitive 20th-century understanding of the nature of pizza consumption.


Brown and Heathers are careful to point out that a GRIM test failure doesn’t necessarily indicate fraud.   Perhaps the descriptions of research methods in a particular paper ignore some detail, like a certain number of participants being disqualified for some legitimate reason and reducing the actual divisor, that would change the GRIM analysis.   In other cases the authors have offered excuses like mistakes by low-paid assistants, simple accidents with the notes, typos, or other similar boo-boos short of intentional fraud.  But the whole point of scientific papers is to convincingly describe the results in a way that others could reproduce them— so I don’t think these explanations fully let the authors off the hook.   And these tests don’t even include the many cases of uncooperative authors who refuse to send researchers like Brown and Heathers their original data.   Thus it seems clear that the large number of GRIM failures, and the papers retracted as a result of this and similar tests, indicate a serious problem with the way research is coordinated, published, and rewarded in modern academia.


And this has been your math mutation for today.



References:  


Sunday, August 27, 2023

286: Adventures In Translation


Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would not have heard in school.   Recording from our headquarters in the suburbs of Wichita, Kansas, this is Erik Seligman, your host.  And now, on to the math.


You may recall that in several earlier podcasts, 252 and 125, I discussed the famous math writer Douglas Hofstadter’s surprising venture into translating poetry from French and Russian into English.   Hofstadter is best known for his popular books that explore the boundaries of math and cognitive science, such as “Godel Escher Bach” and “Metamagical Themas”.   I found his discussions of translation challenges fascinating:  you would think that translation should be a simple mathematical mapping, a 1-1 correspondence of words from one language into another, but it’s actually much more than that.   Aside from different typical structures and speech patterns between languages, every word sits in a cloud of miscellaneous connotations and relationships with other concepts, so you need to really think about the right way to convey the author’s original intentions.   This is why tools like Google Translate can give you the general idea of what a passage says, but very rarely create a translated sentence that sounds real and natural.    


After reading Hofstadter’s description of his translation work, I thought it might be interesting to translate something myself, though no obvious candidates came to mind.   But in the past few years, I became acquainted with a recently escaped Cuban dissident, Nelson Rodriguez Chartrand, who had just published a memoir in Spanish.   Curious to read this memoir, and with no English translation having been planned by the author, I volunteered to take this on myself.  Now I’ll be the first to admit I’m not fluent in Spanish, though I spent four years studying it in high school, but I had the advantage of direct access to the author to help clarify areas of confusion, as well as the vast general resources of the Internet.   This also was probably a much easier translation challenge than Hofstadter’s in general, since this memoir was in prose, so there was no need to worry about things like rhyme and meter.    I also had some decent knowledge of the context, having read numerous memoirs by emigres from Communist countries in the past, as well as having interviewed several such emigres, including Nelson.  Thus, I went ahead and began translating.


Now, some of you might think this is trivially easy these days due to Google Translate, but as I mentioned before, that tool does not create very good text on its own.   You may recall the popular amusement when it first came out, where you translate a small passage across several languages and back to English, and chuckle at how ridiculous it ends up looking.     It is useful, however, as a first step:  as I approached each paragraph, I used that tool to create a “gloss”, an initial awkward translation to use as a starting point.   From there, I would try to understand the core concepts being expressed, and try to rewrite the sentences in more natural sounding English.   I would figure this out with a combination of reviewing the original Spanish text, researching some alternate word translations online, and consulting with Nelson in the harder cases.    The most entertaining challenges were the cases where the initial version simply didn’t make sense at all.


One simple example was a children’s cheer, “Fidel, Fidel, que tiene Fidel, que los imperialistas no pueden con él”, which Google literally translates as “Fidel, Fidel, that Fidel has, that the imperialists cannot with him”.   That doesn’t make much sense initially.   After consulting a few sources, I settled on the translation “Fidel, Fidel, what Fidel has, the imperialists cannot overcome”, which probably expresses the core intention a bit better.   It could also be a question, asking “Fidel, Fidel, what does Fidel have, that the imperialists cannot overcome?”  I toyed with the idea of taking more liberties and trying to make a rhyming chant like the original Spanish, something like “Fidel, Fidel, what Fidel has, makes the imperialist look like an ass”, similar to some of the liberties Hofstadter describes in his poetry translation efforts.   But while that might make it a more effective chant, I was hesitant about making the translation’s tone a bit too lighthearted.


My favorite translation challenges were the ones that made me laugh out loud when first reading the Google translated version.   For example, when describing what he liked to do in the evenings as a child, Nelson wrote a phrase that Google translated as “watching the American dolls”.     Was life in Cuba really so boring that people enjoyed sitting and staring at dolls for hours?   This also conjured up images of those Chucky horror movies— maybe he was trying to make sure the Yankee imperialist dolls didn’t come to life and start chasing poor Communists with a knife?   Things in Cuba could be worse than I imagined.   As you would expect, with some help from the author, I eventually figured out what he was trying to say— he was talking about watching cartoons on TV.    


So, in the end, was my translation any good?   Nelson did a basic check by Google Translating each chapter back to Spanish after I delivered the draft— of course not very good Spanish, but at least sufficient to confirm that I got the general idea.   While I’m sure a professional Spanish-fluent translator would have done better, the most important goal was to make Nelson’s story available in English, and I’m confident we at least achieved that.   We were able to get the translation published, with some aid from a small foundation called the Liberty Sentinels Fund, and it is now available at Amazon and other online booksellers.    You can order it using the link in the show notes at mathmutation.com or just search for it on Amazon, and judge my translation for yourself.      The book is called “The Revolution of Promises”, by Nelson Rodriguez Chartrand.   If you like it, a good review on Amazon would also be helpful.


And this has been your math mutation for today.


References:  

Thursday, June 29, 2023

285: Believe My Proofs Or Else


Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would not have heard in school.   Recording from our headquarters in the suburbs of Wichita, Kansas, this is Erik Seligman, your host.  And now, on to the math.


Before I start, I’d like to thank listener ‘katenmkate’ for posting an updated review of Math Mutation on Apple Podcasts.   You’ve probably observed that the pace of new episodes has been slowing a bit, but seeing a nice review always helps to get me motivated!


Recently I saw an amusing XKCD cartoon online that reminded me of a term I hadn’t heard in a while, “proof by intimidation”.   In the cartoon, a teacher explains, “The Axiom of Choice allows you to select one element from each set in a collection…. And have it executed as an example to the others.”    The caption underneath reads “My math teacher was a big believer in proof by intimidation.”    A nicely absurd joke— wouldn’t it be great if we could just intimidate the numbers and variables into making our proofs work out?    “Proof by intimidation” is a real term though.   To me, it conjures up an image of Tony Soprano knocking at your door, baseball bat in hand, telling you to deliver your proofs or face an unfortunate accident.   But according to the Wikipedia page, the term arose after some lectures in the 1930s by a mathematician named William Feller:  “He took umbrage when someone interrupted his lecturing by pointing out some glaring mistake. He became red in the face and raised his voice, often to full shouting range. It was reported that on occasion he had asked the objector to leave the classroom.”


Those anecdotes certainly match the image that the term itself tends to conjure up in our brains, but the real meaning is slightly less extreme.   As Wikipedia explains it, proof by intimidation “is a jocular phrase used mainly in mathematics to refer to a specific form of hand-waving, whereby one attempts to advance an argument by marking it as obvious or trivial, or by giving an argument loaded with jargon and obscure results.”    For example, while skipping a difficult step in proving some complex theorem, you might say, “You know the Zorac Theorem of Hyperbolic Manifold Theory, right?”   Chances are that the listener doesn’t know that theorem, and is too embarrassed to admit it and challenge your missing argument.


As you would expect, that type of argument certainly makes a proof invalid in many cases.   But there is one interesting wrinkle that most of the online explanations skip over.  Often, stating that something is obvious, or trivially follows from well-known results, is a very important part of a valid mathematical argument or paper.   It simply wouldn’t be feasible to publish modern math, science, or engineering results (or at least, to publish them in an article of reasonable length) if you could not use the building blocks provided by past researchers without re-creating them.   

 

I remember when I started my first advanced math class as a freshman in college, the TA gave us some guidelines for what he expected in our homework, and one of them was that “This is obvious” or “This trivially follows” ARE perfectly acceptable— as long as they are accurate.   Apparently his burdens of grading homework had been significantly complicated in the past by students who felt they had to justify the foundations of algebra in every answer.     For example, if for some step in a proof you need to say, “Let y be a prime number greater than x”, you don’t need to embed a copy of Euclid’s proof of the existence of arbitrarily large prime numbers.     However, encouraging students to use the word “obvious” in our homework did leave him open to bluff attempts through proof by intimidation— he had to keep on his toes to make sure we really were only using it for trivial steps, rather than actually cutting larger corners in our proofs.   In one assignment that I was a bit rushed for, I actually did say “And obviously, …” while skipping over the hardest part of a problem.   The TA, giving me 0 points, commented, “If this was really obvious, we wouldn’t have bothered to assign the problem in your homework!”


Proof by intimidation is just one representative of a family of related logical fallacies.   A humorous list by someone named Dana Angluin has been circulating the net for a while, with methods including “proof by vigorous handwaving”, “proof by cumbersome notation”, “proof by exhaustion” (meaning making the reader tired, not the mathematically valid proof by exhaustion method), “proof by obfuscation”, “proof by eminent authority”, “proof by importance”, “proof by appeal to intuition”, “proof by reduction to the wrong problem”, and of course the classic “proof by reference to inaccessible literature”.   As I read these, I think a lot of them seem to be common these days outside the domain of mathematics…. Though since this podcast isn’t focused on theology, media or politics, I’ll stay out of that quagmire.


And this has been your math mutation for today.


References:  

Sunday, April 30, 2023

284: Don't Square Numbers, Triangle Them


Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would not have heard in school.   Recording from our headquarters in the suburbs of Wichita, Kansas, this is Erik Seligman, your host.  And now, on to the math.


As I mentioned in the last episode, recently I read an unusual book, “A Fuller Explanation”, that attempts to explain the mathematical philosophy and ideas of Buckminster Fuller.  You probably know Fuller’s name due to his popularization of geodesic domes, those sphere-like structures built of a large number of connected triangles, which evenly distribute forces to make the dome self-supporting.   His unique approach to mathematics, “synergetics”, probably didn’t make enough true mathematical contributions to justify all the new terminology he created, but does look at many conventional ideas in different ways and through unusual philosophical lenses, which likely helped him to derive new and original ideas in architecture.   Today we’ll discuss another of his key points, his central focus on triangles as a key foundation of mathematics.


While most conventionally educated people tend to think of geometry primarily in terms of right angles and square coordinate systems, Fuller considered triangles to be a more fundamental element.   This follows directly from their essential ability to distribute forces without deforming.   Think about it for a minute:  if you apply pressure to the corner of a 4-sided figure, such as a square, its angles can be distorted to something different without changing the lengths of the sides:   for example, you can squish it into a parallelogram.   Triangles don’t have this flaw:  assuming the sides are rigid and well-fastened together, you can’t distort a triangle at all.   This stems from the elementary geometric fact that the lengths of the three sides fully determine the triangle:  for any 3 side lengths that can form a triangle at all, there is only one possible set of angles to go with them.  Thus, if you are building something in the real world, triangles and related 3-dimensional shapes naturally result in strong self-supporting structures.
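To make that “one possible set of angles” claim concrete, here is a minimal Python sketch using the law of cosines:  given three side lengths that satisfy the triangle inequality, the angles are completely determined (the function name is just for illustration).

```python
import math

def triangle_angles(a, b, c):
    """Given three side lengths satisfying the triangle inequality,
    return the triangle's angles in degrees via the law of cosines.
    There is only one possible set of angles for each valid (a, b, c)."""
    if not (a + b > c and b + c > a and a + c > b):
        raise ValueError("side lengths do not form a triangle")
    A = math.acos((b*b + c*c - a*a) / (2*b*c))   # angle opposite side a
    B = math.acos((a*a + c*c - b*b) / (2*a*c))   # angle opposite side b
    C = math.pi - A - B                          # the angles sum to pi
    return tuple(math.degrees(x) for x in (A, B, C))

print(triangle_angles(3, 4, 5))   # roughly (36.87, 53.13, 90.0)
```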


According to one of the many legends of Fuller’s life, he first discovered the strength of this type of construction when he was a child in kindergarten.   His teacher had given the class an exercise where they were to build a small model house out of toothpicks and semi-dried peas.  At the time, Fuller’s parents had not yet discovered his need for glasses, so he was nearly blind and could not see what his classmates were doing at all.   As he recalled it, “All the other kids, the minute they were told to make structures, immediately tried to imitate houses.   I couldn’t see, so I felt.   And a triangle felt great!”  Basing his decisions on how the strength of the building felt as he was constructing it, he ended up developing a triangle-based complex of tetrahedra and octahedra.   The unusual look accompanied by the unexpected solidity and strength of his little house surprised his teachers and classmates, and many years later was patented by Fuller as the “Octet Truss”.    Fuller wasn’t the first architect to discover the strength of triangles, of course, but he put a lot more thought into extending them to useful 3-D structures than many had in the past.  


With the strong memory of such early discoveries, it’s probably not surprising that Fuller decided he wanted to build an entire mathematical system based on triangles.   He took this philosophy into all areas of mathematics that he looked at, sometimes to a bit of an extreme.   For example, a fundamental building block of algebra is looking at higher powers of numbers, starting with squaring.   But Fuller considered the operation of “triangling” much more important.  Of course this is a useful aspect of general algebra:  you might recall that the triangular numbers represent the number of units it takes to form an equilateral triangle:  start with 1 ball, add a row of 2 under it, add a row of 3 under that, etc.   So the series of triangular numbers starts with 1, 3, 6, 10, and so on, with the nth triangular number equal to (n squared + n) / 2, or n(n+1)/2.   Fuller considered these numbers especially significant because they also happen to count the pairwise relationships within a group:  the number of possible pairs among n items is n(n-1)/2, which is the (n-1)th triangular number.   For example, if you have 4 people and want to connect phone lines so any 2 can always talk to each other, you need 6 phone lines, the 3rd triangular number.   For 5 people you need 10, the 4th triangular number, etc.   Fuller was pleased that this abstract concept of the number of mutual relationships had such a simple geometric counterpart.   This observation helped to guide him in many of his further explorations related to constructing strong 3-dimensional patterns that could be used in construction.
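Here is a minimal Python sketch checking both halves of that claim:  the formula for the triangular numbers, and the fact that the number of pairs among k items is the (k-1)th triangular number.

```python
from itertools import combinations

def triangular(n):
    """The nth triangular number: 1, 3, 6, 10, ... equals n*(n+1)//2."""
    return n * (n + 1) // 2

# The number of pairwise connections among k items is k*(k-1)/2,
# which is the (k-1)th triangular number: 4 phones need 6 lines, 5 need 10.
for k in range(2, 7):
    pairs = len(list(combinations(range(k), 2)))
    assert pairs == triangular(k - 1)
    print(k, "people need", pairs, "phone lines")
```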


So, are you convinced yet that triangles are important?   Of course these have been a fundamental element of mathematics for thousands of years, but it’s been relatively rare for people to view them and think about them exactly the way Fuller did.   That’s probably why, despite exploring areas of mathematics that had been relatively well-covered in the past, he still was able to make original and useful observations that led to significant practical results.   It’s a nice reminder that just because other people have already examined some area of mathematics, it doesn’t mean that there aren’t more fascinating discoveries waiting just under the surface to be uncovered.


And this has been your math mutation for today.

 


References:  




Sunday, February 26, 2023

283: Escape From Infinity


Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would not have heard in school.   Recording from our headquarters in the suburbs of Wichita, Kansas, this is Erik Seligman, your host.  And now, on to the math.


Recently I read an interesting book, “A Fuller Explanation”, that attempts to explain the mathematical philosophy and ideas of Buckminster Fuller, written by a former student and colleague of his named Amy Edmondson.  Fuller is probably best known as the architect who popularized geodesic domes, those sphere-like structures built of a large number of connected triangles, which evenly distribute forces to make the dome self-supporting.   He had a unique approach to mathematics, defining a system of math, physics, and philosophy called “synergetics”, which coined a lot of new terms for what was essentially the geometry of 3-dimensional solids.   It’s a very difficult read, which is what led to the need for Edmondson’s book.


One of the basic insights guiding Fuller was a distrust of the casual use of infinities and infinitesimals throughout standard mathematics.   He would not accept the definition of infinitely small points or infinitely thin lines, as these cannot exist in the physical world.    He was also bothered by the infinite digits of pi, which are needed to describe a perfect sphere.   Pi isn’t equal to 3.14, or 3.142, or 3.14159, but keeps needing more digits forever.   If a soap bubble is spherical, how does nature know when to stop expanding the digits?    Of course, in real life, we know a soap bubble isn’t truly spherical, as it’s made of a finite number of connected atoms.    Or as Fuller would term it, it’s a “mesh of energy events interrelated by a network of tiny vectors.”   But can you really do mathematics without accepting the idea of infinity?


With a bit of googling, I was surprised to find that Fuller was not alone in his distrust of infinity:   there’s a school of mathematics, known as “finitism”, that does not accept any reasoning that involves the existence of infinite quantities.  At first, you might think this rules out many types of mathematics you learned in school.   Don’t we know that the infinite series 1/2 + 1/4 + 1/8 … sums up to 1?   And don’t we measure the areas of curves in calculus by adding up infinite numbers of infinitesimals?   


Actually, we don’t depend on infinity for these basic concepts as much as you might think.   If you look at the precise definitions used in many areas of modern mathematics, we are talking about limits— the convenient description of these things as infinities is really a shortcut.   For example, what are we really saying when we claim the infinite series 1/2 + 1/4 + 1/8 + … adds up to 1?    What we are saying is that if you want to get a value close to 1 within any given margin, I can tell you a number of terms to add that will get you there.   If you want to get within 25% of 1, add the first two terms, 1/2 + 1/4.   If you want to get within 15%, you need to add the three terms 1/2 + 1/4 + 1/8.    And so on.   (This is essentially the famous “epsilon-delta” style of proof you may remember from high school.)   Thus we are never really adding infinite sums; the ‘1’ is just a marker indicating a value we can get arbitrarily close to by adding enough terms. 
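Here is a minimal Python sketch of that “any given margin” idea:  for each tolerance, it finds how many terms of the series you need to add, assuming we stop as soon as the partial sum is within the margin.

```python
def terms_needed(margin):
    """How many terms of 1/2 + 1/4 + 1/8 + ... bring the partial sum
    within 'margin' of 1?  (The gap after n terms is exactly 1/2**n.)"""
    n, partial = 0, 0.0
    while 1 - partial > margin:
        n += 1
        partial += 0.5 ** n
    return n, partial

for margin in (0.25, 0.15, 0.01):
    print(margin, terms_needed(margin))
# 0.25 -> (2, 0.75), 0.15 -> (3, 0.875), 0.01 -> (7, 0.9921875)
```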


You might object that this doesn’t cover certain other critical uses of infinity:  for example, how would we handle Zeno’s paradoxes?    You may recall that one of Zeno’s classic paradoxes says that to cross a street, we must first go 1/2 way across, then 1/4 of the distance, and so on, adding to an infinite number of tasks to do.   Since we can’t do an infinite number of tasks in real life, motion is impossible.    The traditional way to resolve it is by saying that this infinite series 1/2+1/4+… adds up to 1.    But if a finitist says we can’t do an infinite number of things, are we stuck?   Actually no— since the finitist also denies infinitesimal quantities, there is some lower limit to how much we can subdivide the distance.   After a certain amount of dividing by 2, we reach some minimum allowable distance.   This is not as outlandish as it seems, since physics does give us the Planck Length, about 10^-35 meters, which some interpret as the pixel size of the universe.   If we have to stop dividing by 2 at some point, Zeno’s issue goes away, as we are now adding a finite (but very large) number of tiny steps which take  a correspondingly tiny time each to complete.    Calculating distances using the sum of an infinite series again becomes  just a limit-based approximation, not a true use of infinity.
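Following that finitist line of thought, here is a quick back-of-the-envelope computation in Python (the 10-meter street width is just an assumed example, and the Planck length value is approximate) of how many halvings it takes before the remaining step drops below the Planck length:

```python
import math

planck_length = 1.6e-35   # meters, approximate value
street_width = 10.0       # meters; an assumed example width

# Number of times we can halve the remaining distance before the
# step size falls below the Planck length: a large but finite count.
halvings = math.ceil(math.log2(street_width / planck_length))
print(halvings)   # roughly 119 steps
```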


Thus, you can perform most practical real-world mathematics without a true dependence on infinity.    One place where finitists do diverge from the rest of mathematics is in reasoning about infinity itself, or the various types of infinities.  You may recall, for example, Cantor’s ‘diagonal’ argument which proves that the infinity of real numbers is greater than the infinity of integers, leading to the idea of a hierarchy of infinities.   A finitist would consider this argument pointless, having no applicability to real life, even if it does logically follow from Cantor’s premises.


In Fuller’s case, this refusal to accept infinities had some positive results.    His focus on viewing spheres as a mesh of finite elements and balanced force vectors probably set him on the path to understanding geodesic domes, which became his major architectural accomplishment.     As Edmondson describes it, Fuller’s alternate ways of looking at standard areas of mathematics enabled him and his followers to circumvent previous rigid assumptions and open “rusty mental gates that block discovery”.    Fuller’s views were also full of other odd mathematical quirks, such as the idea that we should be “triangling” instead of “squaring” numbers when wanting to look at higher dimensions; maybe we will discuss those in a future episode.



And this has been your math mutation for today.

 


References:  




Tuesday, December 20, 2022

282: The Man With The Map


Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would not have heard in school.   Recording from our headquarters in the suburbs of Wichita, Kansas, this is Erik Seligman, your host.  And now, on to the math.


I was surprised to read about the recent passing of Maurice Karnaugh, a pioneering mathematician and engineer from the early days of computing.   Karnaugh originally earned his PhD from Yale, and went on to a distinguished career at Bell Labs and IBM, also becoming an IEEE Fellow in 1976.   My surprise came from the fact that he was still alive so recently:  he was born in 1924, and his key contributions were in the 1950s and 1960s, so I had assumed he died years ago.   In any case, to honor his memory, I thought it might be fun to look at one of his key contributions:  the Karnaugh Map, known to generations of engineering students as the K-map for short.


So, what is a K-map?   Basically, it’s a way of depicting the definition of a Boolean function:  a function that takes a bunch of inputs and generates an output, where every input and the output is a Boolean value, either 0 or 1.    As you probably know, such functions are fundamental to the design of computer chips and related devices.   When trying to design an electronic circuit schematic that implements such a function, you usually want to find a minimum set of basic logic gates, primarily AND, OR, and NOT gates, that defines it.   


For example, suppose you have a function that takes 4 inputs, A, B, C, and D, and outputs a 1 only if both A and B are true, or both C and D are true.   You can basically implement this with 3 gates:   (A AND B) , (C AND D), and an OR gate to look at those two results, outputting a 1 if either succeeded.   But often when defining such a function, you’re initially given a truth table, a table that lists every possible combination of inputs and the resulting output.   With 4 variables, the truth table would have 2^4, or 16, rows, 7 of which show an output of 1.   A naive translation of such a truth table directly to a circuit would result in one or more gates for every row of the table, so by default you would generate a much larger circuit than necessary.   The cool thing about a K-map is that even though it’s mathematically trivial— it actually just rewrites the 2^n lines of the truth table in a 2^(n/2) x 2^(n/2) square format— it makes a major difference in enabling humans to draw efficient schematics.
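As a concrete reference, here is a minimal Python sketch of that example function and its 16-row truth table; it confirms the count of 7 rows with output 1.

```python
from itertools import product

def f(a, b, c, d):
    """Example function: 1 exactly when (A AND B) or (C AND D)."""
    return int((a and b) or (c and d))

# Enumerate all 2**4 = 16 input combinations of the truth table.
table = [(a, b, c, d, f(a, b, c, d)) for a, b, c, d in product((0, 1), repeat=4)]
print(sum(row[-1] for row in table))   # 7 rows have an output of 1
```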


So how did Karnaugh help here?   The key insight of the K-map is to define a different shape for the truth table, one that conveys the same information, but in a way that the human eye can easily find a near-minimal set of gates that would implement the desired circuit.   First, we make the table two-dimensional, by grouping half the variables for the rows, and half for the columns.   So there would be one row for AB = 00,  one row for AB = 01, etc, and a column for CD=00, another for CD=01, etc.   This doesn’t actually change the amount of information:  for each row in the original truth table, there is now a (row, column) pair, leading to a corresponding entry in the two-dimensional K-map.   Instead of 16 rows, we now have 4 rows and 4 columns, specifying the outputs for the same 16 input combinations.


The second clever trick is to order each set of rows and columns according to a Gray code— that is, an ordering such that each pair of inputs differs from the previous pair in only one bit.   So rather than the conventional numerical ordering of 00, 01, 10, 11, corresponding to the ordinary decimal values 0, 1, 2, 3, we sort the rows as 00, 01, 11, 10.   These are out of numerical order, but the fact that only one bit changes at a time makes the combinations more convenient to visually analyze.
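For the curious, here is a tiny Python sketch of that ordering, using the standard binary-reflected Gray code formula i XOR (i >> 1):

```python
def gray_order(bits):
    """Binary-reflected Gray code: consecutive entries differ in one bit."""
    return [format(i ^ (i >> 1), f"0{bits}b") for i in range(2 ** bits)]

print(gray_order(2))   # ['00', '01', '11', '10'] -- the K-map row/column order
```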


Once you have created this two-dimensional truth table with the Gray code ordering, it has the very nice property that rectangular blocks of 1s (with sides that are powers of two) correspond to simple Boolean product terms, which enable an efficient representation of the function in terms of logic gates.  In our example, we would see that the row for AB=11 contains a 4x1 rectangle of 1s, and the column for CD=11 contains a 1x4 rectangle of 1s, leading us directly to the (A AND B) OR (C AND D) solution.    Of course, the details are a bit messy to convey in an audio podcast, but you can see more involved illustrations in online sources like the Wikipedia page in the show notes.   But the most important point is that in this two-dimensional truth table, you can now generate a minimal-gate representation by spotting rectangles of 1s, greatly enhancing the efficiency of your circuit designs.   
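Here is a minimal Python sketch that lays out the example function as a 4x4 Gray-ordered grid, so you can see those two rectangles of 1s for yourself:

```python
def f(a, b, c, d):
    """Example function: 1 exactly when (A AND B) or (C AND D)."""
    return int((a and b) or (c and d))

gray = [(0, 0), (0, 1), (1, 1), (1, 0)]   # Gray-code order for two bits

# Rows are indexed by AB and columns by CD, both in Gray order.
print("AB\\CD  00 01 11 10")
for a, b in gray:
    cells = [str(f(a, b, c, d)) for c, d in gray]
    print(f"  {a}{b}    " + "  ".join(cells))
# The AB=11 row and the CD=11 column come out filled with 1s, matching
# the (A AND B) OR (C AND D) groupings described above.
```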


Over the years, as with many things in computer science, K-maps have faded in significance.  This is because the power of our electronic design software has grown exponentially:  these days, virtually nobody hand-draws a K-map to minimize a circuit.  Circuit synthesis software directly looks at high-level definitions like truth tables, and does a much better job at coming up with minimal gate implementations than any person could do by hand.   Some of the techniques used by this software relate to K-maps, but of course many more complex algorithms, most of which could not be effectively executed without a computer, have been developed in the intervening decades.   Despite this, Karnaugh’s contribution was a critical enabler in the early days of computer chip design, and the K-map is still remembered by generations of mathematics and computer science students.


And this has been your math mutation for today.

 


References: