Friday, September 4, 2020

263: Asimov Vs Doyle

Audio Link


Reading Isaac Asimov’s essay collection “The Roving Mind”, I was surprised to see some rather harsh comments aimed at Sherlock Holmes author Arthur Conan Doyle.   Asimov didn’t have any issues with the quality of the fiction, but pointed out some scientific errors in Doyle’s writing.   His most severe comments were aimed at the fact that the evil Professor Moriarty was presented as a mathematical genius, said to have written “a treatise on the binomial theorem”.   Here are Asimov’s comments on that credential:


Moriarty was 21 years old in 1865 (it is estimated), but forty years earlier than that the Norwegian mathematician Niels Henrik Abel had fully worked out the last detail of the mathematical subject known as “the binomial theorem,” leaving Moriarty nothing to do on the matter. It was completely solved and has not advanced beyond Abel to this day.

Asimov, Isaac. The Roving Mind (p. 141). Prometheus Books - A. Kindle Edition. 


Is this really true?  Did Asimov land a slam dunk against poor Doyle, forever disproving his mathematical competence, and casting doubt on the talents of evil genius Moriarty?   Well, let’s take a closer look.


First, let’s refresh our memories on the Binomial Theorem.   Most simply viewed, this is a theorem about the coefficients of the terms when you expand the expression (x + y) raised to the nth power.    For example, x + y to the 1st power has the coefficients 1 1, since there is 1 x and 1 y.   If you square x + y, you get x^2 + 2xy + y^2, so the coefficients are 1 2 1.  Continuing further, if you cube it the coefficients are 1 3 3 1.    If you write out these coefficients in rows, each one staggered so the middle terms appear between two numbers above, you get the famous construct known as Pascal’s Triangle.  Aside from the diagonal sets of 1s going down the left and right edges, each number in the triangle is the sum of the two numbers above it.   The 2 in the second row is the sum of the two 1s above it, the 3s in the third row are each the sum of a 1 and a 2 above, etc.   The binomial theorem basically states that Pascal’s Triangle correctly gives the coefficients for any positive integer power of (x + y).
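To see the pattern emerge for yourself, here is a minimal Python sketch (my own illustration, not anything from Asimov or Doyle) that builds each row of Pascal’s Triangle from the one above it:

    def pascal_rows(n):
        """Yield the first n rows of Pascal's Triangle."""
        row = [1]
        for _ in range(n):
            yield row
            # Each interior entry is the sum of the two entries above it.
            row = [1] + [a + b for a, b in zip(row, row[1:])] + [1]

    for r in pascal_rows(5):
        print(r)
    # [1]
    # [1, 1]
    # [1, 2, 1]
    # [1, 3, 3, 1]
    # [1, 4, 6, 4, 1]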


I have, as usual, oversimplified a bit here.   If you write out a few examples, you’ll quickly see that in the form I just stated, the theorem seems to just fall out naturally from the way algebra works.   It’s pretty easy to prove by simple induction.    And in fact, limited cases have been understood since the 4th century A.D.   But once you allow non-integer or irrational exponents, the theorem becomes a lot more complex.   As Asimov stated, Niels Abel is credited with proving a generalized version of the theorem in the 1820s, as part of an amazing streak of mathematical contributions before his untimely death at the age of 27.   
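For reference, the generalized result is usually stated in modern notation as an infinite series:  for any real exponent r and for |x| < 1,

    \[
      (1+x)^r \;=\; \sum_{k=0}^{\infty} \binom{r}{k} x^k,
      \qquad \binom{r}{k} \;=\; \frac{r(r-1)\cdots(r-k+1)}{k!} .
    \]

When r is a positive integer the series terminates, and we recover Pascal’s Triangle; for fractional or negative r it becomes a genuine infinite series, which is where the convergence questions Abel settled come into play.   (Abel’s 1826 memoir actually went further, handling complex exponents.)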


But does this mean that Asimov is correct, and Moriarty could not have demonstrated his mathematical talents with a treatise on this theorem?   The most ironic aspect of Asimov’s comments, I think, is the lack of self-reflection.   After all, what are Asimov’s essay collections?   Essentially he is writing about well-established areas of science and mathematics, to improve the general public’s understanding of them.   Nearly every one of his essays covers facts that were discovered and/or proven many years before he wrote.   Indeed, if I comment indirectly about this exact work by writing “Asimov was a great essayist, and I enjoyed his commentary in ‘The Roving Mind’ about the binomial theorem”, would that invalidate my own credentials, since by Asimov’s logic I couldn’t possibly have anything worthwhile to say about a well-established theorem?


Here’s one way to brush aside that quibble:  perhaps Asimov was implicitly considering popular essays about science and math, like his books or like the Math Mutation podcast, to be intellectually trivial exercises, not indicating any particular intelligence by the author.   I’ll refrain from further comment in that domain from a motive of self-preservation.   But even assuming we accept that judgment for now, I think Asimov made a deeper error here.


In mathematics, you are never really “done”.   There are nearly always ways to improve or generalize a theorem.   Can we apply it to vectors, matrices, or fields?   Can we apply it to general types of functions other than the algebraic expressions Abel originally considered?   Can we find analogs of the theorem that apply to powers of more complex expressions, or in higher-dimensional spaces?   While Abel may have definitively proven the theorem as originally stated, that was far from the end of possible work in related areas.    A little web searching, in fact, uncovers a 2011 paper by modern-era mathematician David Goss called “The Ongoing Binomial Revolution”, which discusses several major 20th century mathematical results that descend from the Binomial Theorem.   I’d like to provide more details, but I’m afraid most of the paper is a bit over my head.   However, it ends with a telling comment:   “Future research should lead to a deeper understanding of these recent offshoots of the Binomial Theorem as well as add many, as yet undiscovered, new ones.”


It looks like Asimov himself, while a very intelligent man and a great writer, held a somewhat naive view of mathematics.    It’s a common mistake, often made by people in applied fields who have always consumed math as well-established, fully presented theorems and formulas.   We need to recognize that a huge part of the genius of mathematics is the idea of abstracting and generalizing previous observations, and this work never really stops.   When a theorem is proven, that usually leads to even more opportunities for offshoots, generalizations, and other further expansion of knowledge.    Thus, I think we have to conclude that Isaac Asimov was wrong, and it was perfectly reasonable for an academic to write meaningfully about the Binomial Theorem many years after Abel, like Goss in his 2011 paper.     Holmes was right to fear Moriarty, as there is nothing more dangerous than an archvillain who understands mathematics.


And this has been your math mutation for today.



References:  

Friday, July 31, 2020

262: My Bathroom Needs More Beryllium

Audio Link

Recently, during one of my less exciting meetings, I started idly doodling on a notepad.   For some reason, I drew a small 3x3 square, divided it into 9 subsquares, and started filling in numbers so all the sums in each row, column, and diagonal would match, no doubt drawing on vague memories from doing such puzzles in my childhood.   This is the classic “magic square” puzzle, where you try to fill in an NxN square with numbers from 1 to N^2 and produce such matching sums.   

The 3x3 magic square is relatively trivial to solve, but is interesting in that it’s the only size square (other than the unsolvable 2x2 case and the silly 1x1 case) where the “magic” solution is unique.    If you create a 3x3 square with 4 9 2 in row 1, 3 5 7 in row 2, and 8 1 6 in row 3, that is literally the only possible solution:  any other solution you come up with will be some combination of reflections and rotations of that one.    But surprisingly, the number of solutions climbs very rapidly as the square size grows:  there are a whopping 880 4x4 squares, over 275 million 5x5s, and over 10^19 6x6s.   Apparently the full set of magic squares of arbitrary size has not been characterized in detail by mathematicians, though there are procedures known for generating arbitrarily large examples.   
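As a quick sanity check, here is a small Python sketch (my own illustration) that verifies the magic property of the 4-9-2 square just described:

    def is_magic(sq):
        """Check that every row, column, and diagonal has the same sum."""
        n = len(sq)
        target = sum(sq[0])
        lines = [sum(row) for row in sq]                              # rows
        lines += [sum(sq[r][c] for r in range(n)) for c in range(n)]  # columns
        lines.append(sum(sq[i][i] for i in range(n)))                 # main diagonal
        lines.append(sum(sq[i][n - 1 - i] for i in range(n)))         # anti-diagonal
        return all(s == target for s in lines)

    lo_shu = [[4, 9, 2],
              [3, 5, 7],
              [8, 1, 6]]
    print(is_magic(lo_shu))   # True -- every line sums to 15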

There are a number of ways you can transform a magic square and preserve its “magic” property, of all the row, column, and diagonal sums matching.   You can probably figure out some of these off the top of your head.   The most obvious one is to add, subtract, or multiply all numbers by a constant, though that does result in violating the rule of using the numbers 1 thru n^2.   A less obvious one is to choose two cells at opposite points on a diagonal, and interchange their rows and columns— after playing with a few examples, it’s not hard to see why this works.  There are various other related row/column interchange methods; the Wikipedia page describes numerous variations, as well as a generalization. 
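To make that less obvious transformation concrete, here is a sketch of it in Python (reusing the is_magic helper above; the sample square is Dürer’s famous 4x4, my own choice of example):

    def diagonal_swap(sq, i):
        """Interchange rows i and n-1-i, then columns i and n-1-i.
        Cells (i, i) and (n-1-i, n-1-i) sit at opposite points on the
        main diagonal, so the result remains magic."""
        n = len(sq)
        j = n - 1 - i
        out = [row[:] for row in sq]
        out[i], out[j] = out[j], out[i]        # interchange the two rows
        for row in out:
            row[i], row[j] = row[j], row[i]    # interchange the two columns
        return out

    durer = [[16,  3,  2, 13],
             [ 5, 10, 11,  8],
             [ 9,  6,  7, 12],
             [ 4, 15, 14,  1]]
    print(is_magic(diagonal_swap(durer, 0)))   # True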

One of the most amusing things I discovered about these squares is that the “magic” part isn’t just a cute name:   for thousands of years, people have attributed mystical properties to these squares, especially the unique 3x3 one.   For example, in ancient China, this was known as the “Lo Shu” square, said to have first been discovered on the back of a magical turtle that emerged from the Luo river during a large flood.   The 8 outer cells of the square are associated with the 8 trigrams used in the I Ching, and Feng Shui practitioners associate each of the squares with one of the 5 classical Chinese elements:  Earth, Wood, Water, Metal, and Fire.   Feng Shui, as you might recall, is the ancient Chinese art of properly organizing and arranging your house so the elements are in harmony.   Since there are more squares than elements, a few are repeated:  4, 3, and 8 are connected to Wood, 2 and 5 to Earth, 7 and 6 to Metal, 9 to Fire, and 1 to Water.

Here’s how you use the magic square in Feng Shui:   lay it out over a floor plan of your house, such that the number 1 corresponds to your front door.   It’s a bit confusing how they decide to scale the square, but I guess you’re supposed to fit it so it just covers your house as closely as possible.    Unless your house is perfectly square, parts of some of the subsquares will be outside your walls, or in little-used areas like storage closets.  Then for each square, decide if it’s well “energized” in your home— if not, you may need to compensate by putting more of that element in your house.   As one Feng Shui site states, if square 6, which corresponds to the Metal element, needs enhancing, “Wearing gold is recommended, as well as hanging a gold-toned metal windchime.”   I’m not quite sure what makes gold a better representative of “metal” than, for example, tin, but such ignorance probably explains why my karma is so poor these days.

Actually, what always amuses me about these New Age uses of “elements” is that they ignore the last 200 years of chemistry, where we have discovered the true elements, as you know them from the periodic table.   But this is fixable:  in fact, scientists currently know of 118 elements, which falls very close to the square number of 121, equal to 11x11.   So for a modern, accurate Feng Shui, we should use an 11x11 magic square for this divination.   The fact that enormous numbers of such squares exist might add some confusion, but since we apparently believe in stuff like the I Ching anyway if we’re using this method, just cast some yarrow sticks to let the universe determine which construction method it wants you to use to build your square.   Then you can make each square correspond to the atomic number of an actual element in the Periodic Table, starting over at 119 for a handful of duplicates.  (Actually a much better ratio of duplicates than the 3x3 method in any case.)
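And if you actually want to generate an 11x11 square to lay over your floor plan, the classic “Siamese” method is one well-known procedure that works for any odd order; here is a sketch (again reusing is_magic from above):

    def siamese(n):
        """Build an n x n magic square for odd n by the Siamese method:
        place 1 in the middle of the top row, then repeatedly move up
        and to the right (wrapping around the edges), dropping down one
        cell instead whenever the target cell is already filled."""
        sq = [[0] * n for _ in range(n)]
        r, c = 0, n // 2
        for k in range(1, n * n + 1):
            sq[r][c] = k
            nr, nc = (r - 1) % n, (c + 1) % n
            if sq[nr][nc]:
                nr, nc = (r + 1) % n, c
            r, c = nr, nc
        return sq

    print(is_magic(siamese(11)))   # True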

Now you can lay this 11x11 square over your house’s floor plan, and identify which elements to adjust in your house to truly enable it to vibrate in sync with all the energies of the world.   Perhaps your kitchen needs more of element 11, Sodium— just add more salt to your food.   Or maybe your child’s room is overlapping the square of element 82, Lead— better check it for old paint.    If square 86 falls in your house, better call a Radon inspector ASAP.    I’m not sure quite what to do if you detect a deficiency of square 117, Tennessine, though— given that the longest-lived samples have lasted a few hundred milliseconds, you may need to get some extra help from the spirits on that topic, or build a nuclear reactor.  But you can’t argue with the math.

And this has been your math mutation for today.


References:  






Tuesday, June 30, 2020

261: The Mystery of the S

Audio Link

Recently, my wife jokingly suggested a rather straightforward topic to me, and I was surprised to realize I had never discussed it in this podcast.   She asked me if I had ever explained why we Americans call the topic of this podcast “math”, while across the pond in the UK, everyone calls it “maths”, with an S at the end.    Given this podcast’s title, it seems like something we should discuss at some point.   So here we go.

Considering it initially, I think we need to figure out whether the word is singular or plural.    It seems pretty singular to me:   as you may have guessed from our variety of topics in previous episodes, I consider “math” to be a mass noun referring to a broad and general field of study, which includes algebra, geometry, topology, calculus, etc.    It’s an abbreviation for the word “mathematics”, which has the same meaning, and whose final S is just a coincidence rather than marking a plural.   However, if you view each of these areas as an individual “math”, you could argue that we then need the plural “maths” to cover them all, with “maths” being its own plural word rather than an abbreviation of “mathematics”.     Or, on the other hand, maybe “maths” is still singular, but a better abbreviation of “mathematics” since it contains the same final letter.    Personally, my deciding factor is that I find the combination of a “th” and “s” sound in succession somewhat awkward to pronounce.

Naturally, this discussion doesn’t seem to be leading to a clear answer.    So, to dig further, I turned to the ultimate arbiter of truth, the Internet.     And by looking at a few articles there, I’m now more confused than before.    Apparently both “math” and “maths” arose as independent words sometime in the 20th century, descending from previous written abbreviations that were not actually used in spoken language.    According to some online articles, there was a written “maths.” spotted in a letter from 1818, while an early “math.” appeared in 1847.   But since both of them had dots after, indicating they were consciously thought of as abbreviations rather than words, such early uses aren’t very definitive.   

Another thing to think about is the relationship with other words that are similar in nature.   Economics is a mass noun for another broad field of study, similar to mathematics— so why do we always simplify it to “econ”, with no S, rather than “econs” with an S?   That isn’t quite an absolute proof though, since I don’t think ‘econ’ is fully considered a word in the same way ‘math’ is; it’s still more of an abbreviation.     My word processor even labels it as a misspelling.  The Guardian also makes the amusing point that the US-UK difference is exactly the opposite for the word “sport” or “sports”, with Americans referring to “sports”, while the Brits use “sport”.    Does this say something about the intellectual vs athletic tendencies on either side of the pond?

The online articles linked in the show notes point out some other interesting tidbits.   There’s also an 1854 reference to “math’s”, with an apostrophe-s, making it sound more like a possessive ending than a pluralization, an intriguing variant on our reasoning.   Or maybe the writer was just a bit confused about grammar in general.    There’s also an Old English word “math” that refers to the cutting of crops.  Could the need of farmers to measure out fields and calculate proper planting rates have somehow contributed to the modern word?    A final note is that the word “maths” didn’t really become the definitive form in the UK until around 1970.   Could this have been a case of snobby Europeans wanting to distinguish themselves from those crass Americans in the era of Nixon?

Anyway, it looks like there is no ultimate definitive answer.    You’ll just have to see which one flows better on your tongue, and proceed from there.

And this has been your math mutation for today.


References:  




Sunday, May 17, 2020

260: The Conway Criterion

Audio Link

I was sad to hear of the recent passing of Princeton professor John Horton Conway.   (He’s another victim of the you-know-what that I refuse to mention in this podcast due to its oversaturation of the media.)    Professor Conway was a brilliant mathematician known for his interest in mathematical games and amusements.   My time as an undergraduate at Princeton overlapped with Conway’s professorship there, though sadly, I was too shy at the time to actually discuss math stuff with him.     Among his contributions were the “Game of Life” (that’s the mathematical game played on a two-dimensional grid, not the children’s board game!) and the concept of “surreal numbers”, both of which we have discussed in past Math Mutation episodes.   Anyway, if you’re the type of person who listens to this podcast, you’ve probably already read a few dozen Conway obituaries, so rather than repeating the amusing biographical information you’ve read elsewhere, I figured the best way to honor him is to discuss another of his mathematical contributions.   So today we’ll be talking about the “Conway Criterion” for periodic planar tilings.

You probably recall the notion of a planar tiling:  basically we want to cover a plane with repeated instances of some shape.   A brick wall is a simple example, though rectangular bricks can be a little boring, due to the ease of regularly fitting them in neat rows.   Conway himself managed to derive some interesting discussion from simple bricks though— in the show notes at mathmutation.com, you can find a link to a video of his Princeton walking tour titled “How to Stare at a Brick Wall”.   But I think hexagons are a slightly more visually pleasing pattern, as you’ve likely seen on a bathroom floor somewhere, or in a beehive’s honeycomb.   Looking at such a hexagon pattern, you might ask whether there’s a way to generalize it:  can you somehow use simple hexagons as a starting point to identify more complex shapes that will also fill a plane?   Conway came up with the answer Yes, and identified a simple set of rules to do this.

To start with, you can imagine squishing or stretching the hexagons— for example, if you squash a honeycomb, you’ll find squished hexagons, with not all angles equal, still fill the plane.   With Conway’s rules, you just need to start out with a closed topological disk.   This means essentially taking a hexagon and stretching or bending the sides in any way you want, as long as you don’t tear the shape or push sides together so they intersect.    You can introduce new corners or even curves if you want.    You then identify six points along the perimeter.   In the case of a hexagon, you can just use the six corners.   But the six points you choose, let’s call them A/B/C/D/E/F, don’t have to be corners.   They just have to obey the following rules:   

  1. Boundary segments AB and DE are congruent by translation, meaning they are the same shape and can be moved on top of each other without rotating them.
  2. The other four segments BC, CD, EF, and FA are each centrally symmetric:  they are unchanged if rotated 180 degrees around their centers.
  3. At least three of the six points are distinct.  

It’s pretty easy to see that regular hexagons are a direct example of meeting these rules, since any two opposite sides are congruent by translation, and line segments are always centrally symmetric.     But Conway generalized and abstracted the idea of a hexagon-based tiling:  each of the six segments can potentially have curves and zigzags, as long as they ultimately meet the criteria.   The points you use can be anywhere along the outer edge, as long as they divide it in a way that meets the criterion.   If you sketch a few examples you’ll probably see pretty quickly why these rules make sense.
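Here is a rough Python sketch of such a check (my own construction:  it uses exact integer coordinates, places the six points at polygon vertices, and tests the rules quite literally, so treat it as an illustration rather than a general tool):

    def chain(poly, i, j):
        """Boundary vertices from index i forward around to index j."""
        n, out = len(poly), [poly[i]]
        while i != j:
            i = (i + 1) % n
            out.append(poly[i])
        return out

    def translated_match(p, q):
        """True if chain q is chain p shifted by one fixed vector."""
        if len(p) != len(q):
            return False
        dx, dy = q[0][0] - p[0][0], q[0][1] - p[0][1]
        return all((x + dx, y + dy) == pt for (x, y), pt in zip(p, q))

    def centrally_symmetric(p):
        """True if the chain is unchanged by a 180-degree rotation
        about the midpoint of its two endpoints."""
        m = len(p) - 1
        cx, cy = p[0][0] + p[m][0], p[0][1] + p[m][1]
        return all((p[i][0] + p[m - i][0], p[i][1] + p[m - i][1]) == (cx, cy)
                   for i in range(len(p)))

    def conway_criterion(poly, marks):
        a, b, c, d, e, f = marks
        ab = chain(poly, a, b)
        ed = list(reversed(chain(poly, d, e)))   # segment DE, read from E back to D
        middles = [chain(poly, b, c), chain(poly, c, d),
                   chain(poly, e, f), chain(poly, f, a)]
        return (translated_match(ab, ed)
                and all(centrally_symmetric(s) for s in middles)
                and len(set(marks)) >= 3)

    # A squashed hexagon with opposite sides parallel and equal:
    hexagon = [(0, 0), (2, 0), (3, 2), (2, 3), (0, 3), (-1, 1)]
    print(conway_criterion(hexagon, (0, 1, 2, 3, 4, 5)))   # True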

Thus, this is a general formula that can give you an infinite variety of interesting tile shapes to cover your bathroom floor with.    At the links in the show notes you can see articles with a crooked 8-sided example, and even a curvy form that looks like a pair of fish.   An article by a professor named Bruce Torrence at Randolph-Macon College also describes a Mathematica program that can be used to design and check arbitrary Conway tiles.   I wouldn’t be surprised if famous artworks involving complex tilings, like the carvings of M. C. Escher or classical Islamic mosaics, ultimately used an intuitive understanding of similar criteria to derive their patterns.   Though, most likely, they didn’t prove their generality with the same level of mathematical rigor as Conway.

We should also note that the Conway criterion is sufficient, but not necessary, for a shape to tile the plane.   In other words, while it’s a great shortcut for coming up with an interesting design for a mosaic, it’s not a method for finding all possible tilings:  there are many planar tilings that do not fit the Conway criterion, and you can find lots of examples online if you search.    So this criterion is a useful shortcut, not a full characterization of all periodic planar tilings.

Anyway, Conway made many more contributions to the theory of planar tilings, and related abstract areas like group theory.   If you look him up online, you can see numerous articles about his other results in these areas, as well as more colorful details of his unusual personal trajectory through the mathematical world.   As with all the best mathematicians, his ideas will long outlive his physical body.

And this has been your math mutation for today.


References:  

Monday, March 9, 2020

259: Is Reality Really Real?

Audio Link

Recently I read an amusing book by a cognitive scientist named Donald Hoffman, titled “The Case Against Reality”.    Hoffman argues that what we think we are perceiving as reality is actually an abstract mathematical model created by our neurons, with very little relationship to reality as it may actually exist.   Note that this is not one of those popular arguments that we’re living inside a computer simulation— Hoffman isn’t claiming that.   This also isn’t an argument about the limits of our perception, such as our inability to fully observe subatomic particles at a scale that would confirm or deny string theory.    He accepts as a starting point that we do actually exist and are perceiving something.    However, that ‘something’ that we seem to be perceiving is radically different from the core concepts of space, time, and general physics that we think we’re observing.

A key metaphor Hoffman points to is the idea of your computer desktop screen, where you see icons representing files and applications.   You can perform various actions on this screen, moving files between folders, starting applications, etc.   Some of them even have real consequences:   you know that if you drag a file into the trash and click the ‘empty trash’ button, the file will cease to exist on your computer.   Yet you are completely isolated from the world of electrical signals, semiconductor physics, and other real aspects of how those computer operations are actually implemented.   The ultra-simple abstract desktop serves your need for practical purposes in most cases.   Hoffman believes that our view of the universe is similar to a user’s view of a computer desktop:  we are seeing a minimal abstract model needed to conduct our lives.

It’s easy to argue, from observing more primitive animals in nature, that evolution does have a tendency to take shortcuts whenever possible.   Hoffman points out a number of funny examples, like a type of beetle that identifies females through their shiny backs, and can be fooled into trying to mate with a small bottle.   If the species could successfully perpetuate itself by using this shininess heuristic to quickly identify females with minimal energy, why would evolution bother trying to teach it to observe the world in more detail?   Hoffman calls this the FBT, or “Fitness Beats Truth” concept.   He even tries to raise it to the status of a theorem, by making various assumptions and then showing that given the choice between providing true perceptions or the minimal required to enable reproductive fitness in a particular area, the sensible minimal-cost evolutionary mechanism would always choose fitness over truth.

So, is this an unassailable proof that we are simply observing some complex computer desktop, rather than actually perceiving something close to reality?   Actually, I see a few issues with Hoffman’s argument.   The first one is that he dismisses far too casually the idea that maybe, once a certain amount of interaction with reality is needed, it’s easier for evolution to build actual reality-perceiving mechanisms than to come up with a new, complex abstraction that applies to the situation.    While a major computer chip manufacturer can do lots of work using abstract design schematics, for example, at some point they may need to debug key manufacturing issues.   When this happens they stop using purely abstract models, and look at their actual chips using powerful electron microscopes.    Isn’t it possible that evolution could reach a similar point, where there is a need for general adaptability, and the best way to implement it is to provide tools for observing reality?

Another big argument in favor of our perceptions being close to reality is that we are able to derive very complex indirect consequences of what we observe, and show that they apply in practice.   For example, a century ago Einstein came up with the theory of relativity to help explain strange observed facts, such as the speed of light seeming to be the same for moving and nonmoving observers.   He and other scientists derived numerous consequences from the resulting equations, such as surprising time dilation effects.   Eventually these consequences became critical in developing modern technologies such as the Global Positioning System, which we use every day to accurately navigate our cars.   Could evolution have come up with such a self-consistent abstract system, originally in order to enable groups of advanced monkeys to more efficiently traverse the treetops and gather bananas?   I’m a bit skeptical.   I think for evolutionary purposes, our approximate built-in perception that light travels instantaneously has been perfectly adequate for most of history, and our advanced observations beyond that are detecting a real level of reality that evolution didn’t particularly care about.

Finally, one other critical flaw is that Hoffman claims that even our core concepts such as space and time are part of this abstract model that evolution has created for us.   But evolution itself is a process that we have observed to occur in space and over time.  So if evolution has created some kind of abstract model for us, that in itself proves that space and time exist in some sense; otherwise the argument undercuts itself, since there would have been no space or time in which evolution could have happened.    I suppose you could rescue it by saying just enough of space and time is real to enable evolution, but that seems a little hokey to me.

So, for now, I’ll happily move on with my life in the belief that reality really does exist.    

And this has been your math mutation for today.


References:  





Saturday, January 25, 2020

258: Memory Or Knot

Audio Link


The discussion of computer memories and time in the last episode brought to mind another form of digital memory I’ve been planning to talk about at some point.   The short life span of our current computer memories is always a potential concern— as you can see in the chart linked in our show notes at mathmutation.com, nearly all current forms of data storage, such as hard drives, flash memories, or floppy disks, will likely last less than 30 years.    For things that are important, data providers and backup services are continually copying and transferring our data.   But did you know that there is a simple form of digital storage that requires no ongoing power, and can last more than 500 years?    I am, of course, talking about the Inca system of knotted strings known as the quipu.

The quipu system, which was used by the Incas of Peru before the Spanish conquest in the 1500s, was actually quite sophisticated for its time.    The knotted strings represented a place-based number system, essentially assigning a region of the string to each digit of a number they wanted to store.   Thus, to record  the number 246 in a quipu, you would put 2 knots in the first region, 4 in the second, and 6 in the third.   They were even aware of the concept of 0 in a place-based system as well, a concept that was just emerging into common use in Europe when the Spanish began conquering the Americas.   The quipu properly represented the digit 0 by having no knots in the corresponding region.   

As another clever enhancement, they had a special type of knot, a “long knot” which involved extra turns of the string, which would always be used only to represent a digit in the ones place.    This way they didn’t have to waste a string if recording a number that was only a couple of digits:  they could start a new number in the next part of the string, by switching to long knots to show they were back in the ones place.   So if you had a 6-segment string, and wanted to store the numbers 246 and 123, you could put them both on the string, without any worry about it being confused for 246,123.  You just had to make sure that you tied the sections representing the 6 and the 3 with long knots.
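Here is a toy Python model of that encoding idea (my own illustration, not a real quipu-making guide; real quipus were subtler, using for instance a special figure-eight knot for a lone 1 in the ones place):

    def encode_quipu(numbers):
        """Encode numbers as clusters of knots along one string:
        'S' is a simple knot, 'L' a long knot marking the ones place,
        and '-' an empty region standing for a zero digit."""
        clusters = []
        for num in numbers:
            digits = [int(d) for d in str(num)]
            for i, d in enumerate(digits):
                knot = 'L' if i == len(digits) - 1 else 'S'
                clusters.append(knot * d if d else '-')
        return ' '.join(clusters)

    print(encode_quipu([246, 123]))
    # SS SSSS LLLLLL S SS LLL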

With this system, the Incas had an easy and efficient way to do proper accounting for business, taxes, and similar issues.    Researchers are pretty certain that this interpretation of quipus is correct, since some quipus include periodic summary cords, each recording the sum of the preceding group of cords— it would be hard for these sums to work out correctly by luck alone.   There is actually a lot more to these quipus though:  there are many intricate quipus that don’t seem to be recording numbers, and likely have other meanings.   Sadly, the Spanish invaders didn’t think of trying to preserve the quipus or their interpretation, so we don’t have any records of detailed guidance from the original creators.    About 900 known quipus have survived to the present day.

There have, however, been some intriguing developments in the last few decades, providing progress in interpreting the non-numerical aspects of this system.  Anthropologist Gary Urton of Harvard noticed that in an area where Spanish census takers had recorded 132 local lords paying tribute, there was a known quipu with exactly 132 strands.   This led to an interpretation where clans could be identified based on the way the quipu cords were attached to the main one.    Also, anthropologist Sabine Hyland from St Andrews University discovered a remote Peruvian village where there were locals who could connect narratives passed down through the generations to a particular set of quipus, leading to further breakthroughs in their interpretation.   She found 95 core combinations of color, fiber, and knot direction, which together may be interpretable as a phonetic system, essentially like letters of the alphabet.  The research is still in progress, but it may be that the quipus were effectively the equivalent of a full written language.

So, if you are nervous about the short lifespan of your computer data, go buy a bundle of strings and start knotting them in a well-defined pattern.   Anything you could save on your hard drive could easily be saved on a sufficient number of quipu strings, and might be more likely to be accessible to your great-great-great-grandchildren.  Just remember to tell someone what they mean before Spanish invaders come knocking at your door.

And this has been your math mutation for today.


References:  




Sunday, December 29, 2019

257: Turning Things Sideways In Time

Audio Link


I recently came across a reference to a famous quote by supercomputing pioneer Danny Hillis:   “Memory locations are wires turned sideways in time”.   When first hearing this, it sounds a bit mind-bending— how can you turn something sideways in time?    But once you recall the idea of time as a fourth dimension, the quote is pretty clever.    Normally, you think of an ideal wire as transferring a bit, the simplest piece of computer data, from one point of space to another.   And an ideal memory location stores a bit in a particular place now, so you can access it in the future.    Thus, if you could turn things sideways in time, transferring a movement along a line in space to a movement to a future point in time through a rotation in the fourth dimension, you could indeed transform a wire into a memory by simply carrying out such a rotation.    So many things in modern computing would be simpler if we could do this easily.

On the other hand, on closer scrutiny, the idea of memories as sideways wires in time doesn’t really seem that useful.   In practice, we must base computer memories on very different applications of physics than wires— simple dimensional rotations are just not in our current bag of manufacturing tricks.    And if you look more closely at wires, while they do carry bits to another point in space, they also carry them to a future point in time, due to propagation delay.   The fact that wires are not instantaneous, but have real delays, has become more and more significant as computing technology has advanced; due to the infinitesimally small components of current processors, these delays are a constant consideration for every designer.   So the conceptual rotation in time of a wire, to translate it to a memory, would be something less than 90 degrees, as wires already are turned a bit in the time direction.

Another interesting thing about this concept of rotating in time is that, once you think about it, it’s not really a statement about modern computing technology, just about thinking of converting a transfer in space to one in time as a dimension.   You can apply it in a lot of other cases.   How about this one:  “Podcasts are just radio shows turned sideways in time”.   That kind of applies, right?    But we don’t even need to constrain ourselves to modern high-tech.    Couldn’t we just as easily say “Cassette tapes are concerts turned sideways in time?”    Or “Government laws are king’s proclamations turned sideways in time?”   In some sense, the evolution of modern life fits in here too— can’t we say that “Genes are biological mutations turned sideways in time”?    Even the ancient Greeks could have gotten into the act, by saying “Papyrus scrolls are Plato’s lectures, turned sideways in time.”    Sure, all of them do sound profound to some degree on first hearing, but are really just clever restatements of the idea of time as a dimension.

Another interesting aspect of this quotation is Hillis’s concern with “deep time”, the idea that he wants to somehow be able to communicate information about our current society long distances along the time dimension.    Already we barely know what our predecessors 3000 or so years ago thought about and how they lived.    We may seem to have generated a lot of information in our current civilization, but if there was some major disaster and we spent a few years without power to refresh our computer memories, hardly any of it would survive.   Our mass-produced physical books are much less hardy, for the most part, than the ones created by ancient Greeks and Romans, so even written information would disintegrate very quickly.   And what about 10,000 years from now?    We know basically nothing about any human civilization that may have existed that long ago.

To address this problem, Hillis became involved in the Long Now Foundation, an organization that attempts to look at long-term problems facing humanity in the 10,000 year range.   One of their key projects is to create the Clock of the Long Now, which you may recall me mentioning in an earlier podcast:   it’s a clock that is built to last at least 10,000 years, by being put in a sheltered location and not requiring any external source of power.   They are even trying to take human factors into account, for example by not allowing any expensive materials in the construction, so there will be no incentive for future thieves to destroy and loot it.   And it is only allowed to use materials available during the Bronze Age, so it will still be repairable after a general societal collapse if needed.    Though if society has collapsed to the point where Math Mutation podcasts are gone, will there really be a point to our race continuing to survive?    In any case, Jeff Bezos seems to think so, since he has spent over 40 million dollars funding this clock’s construction.

And this has been your math mutation for today.



References: