Sunday, July 30, 2017

232: Overcooked Bacon

Audio Link

You’ve almost certainly heard of the “Six Degrees of Separation” phenomenon, sometimes known whimsically as the “Six Degrees of Kevin Bacon”, where it supposedly takes an average of only six connections to reach any person on the planet from any other.   In the common Kevin Bacon parlor game, you name any actor, and he’s acted in a movie with someone who acted with someone (dot dot dot), and after naming fewer than six movies, you can always reach Kevin Bacon.   For example, let’s start with Kristen Bell, one of our greatest living actresses due to her role as the Hexagon in Flatland The Movie.   For her, we just need two steps:  she was in Big Miracle with Maury Ginsberg, and Ginsberg was in My One and Only with Kevin Bacon.  In earlier podcasts, I’ve mentioned that mathematicians often think of this in the slightly geekier terms of paper authorship and Paul Erdos.   Although this concept is well known, its truth is not actually as well-established as you might think.   I’m not referring to the well-documented fact that numerous actors are better-connected than Kevin Bacon— perhaps his lack of math podcasting has damaged his career in recent years— but to the general idea of connecting any two people in six steps.

Wikipedia mentions several early precedents for the Six Degrees concept, starting with Hungarian author Frigyes Karinthy in the 1920s.   However, it really got its kickstart in recent times from a Psychology Today article in 1967, where Stanley Milgram described an experimental test of the theory.   He gave volunteers assignments to send a letter to random people around the U.S., with the rule that they could only send it to people they knew on a first-name basis, who then had to forward it on under similar restrictions.    He found that on average, it took only six steps of forwarding for the letters to reach their targets.   As you would expect, news about the surprising result spread rapidly.    However, many years later, a writer named Judith Kleinfeld looked up Milgram’s original raw data, and was surprised at what she saw.   In the Psychology Today article, Milgram failed to report that less than 30 percent of the letters ever made it to their destination— the six degrees were only measured in the small minority that succeeded, making the overall result much less impressive.   Sure, perhaps there were legitimate reasons for some failures— people in the middle of the chain may have been suspicious of being asked to forward a random item to a stranger— but it still makes the reported result very questionable.

To think about why the Six Degrees idea might be true or false, it’s useful to abstract the question into a problem in graph theory.    This means we should think of people as dots, or vertices, on a large piece of paper, with connections between them symbolized by edges between any pair of vertices.    If we’re representing people on Earth, there are around six billion vertices.   We’re asking the question:  to get from any vertex A to any other vertex B, how many edges do we need to traverse?    If we assume each person has about 1000 acquaintances, then in one step we can reach 1000 vertices, in two steps 1000x1000 or one million, and in three steps one billion.   So we would expect to reach anyone on the planet in an average of fewer than four connections, making the Six Degrees idea actually seem a bit pessimistic.
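This back-of-envelope growth argument is easy to sketch in a few lines of Python. This is just the rough calculation from the discussion above, using its assumptions of about six billion people and 1000 acquaintances each, and ignoring the (very real) overlap between friend circles:

```python
# Rough reach estimate in a purely random acquaintance graph.
# Assumes ~6 billion people and ~1000 acquaintances apiece,
# with no overlap between anyone's friend circles.
population = 6_000_000_000
acquaintances = 1000

reached = 1
steps = 0
while reached < population:
    reached *= acquaintances  # each step multiplies our reach by 1000
    steps += 1

print(steps)  # 4 -- so four steps already over-cover the planet
```

Of course, real friend circles overlap heavily, so this is only an optimistic upper bound on how fast our reach grows.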

But something seems too easy about this random graph model.   Are any two random vertices really equally likely to be connected?   Am I just as likely to know an arbitrary computer geek from Oregon as an arbitrary goat farmer from Cambodia?   It seems that we should somehow be able to account for being more likely to know people in a similar location, job, social class, etc.   A nice article on the “Mathematics Illuminated” website discusses some alternative models.   As an opposite extreme to the random graph, suppose we arrange the vertices in a large circle, where every vertex is connected to only its thousand nearest neighbors.   Now to get from one vertex to the one on the opposite side of the circle, which is about 3 billion vertices away, we need 3 billion divided by 500, or six million, connections.   Definitely more pessimistic than our original model.   

But this is an extremely negative model:  while we all know many more people among our neighbors, most of us certainly do know some people who are well-connected in far-off locations.   If we just allow a few of these long distance connections, we significantly reduce the number of hops needed.  For example, suppose we designate 1000 equally spaced “world traveler” vertices, representing people who are well-connected in global organizations, that are all connected to each other.  Then, to get anywhere, we just have to traverse at most 3 million vertices to reach the nearest long-distance edge, cross it, and traverse a similar number on the other side— reducing the worst-case traversal from 6 million connections to about 12,000.     
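The two estimates above can be sketched as arithmetic in Python. This is only a rough model, using the circle layout and the 1000-hub assumption from the discussion, where each hop covers 500 vertices in one direction around the ring:

```python
# Hop estimates for the circular "only know your neighbors" model,
# with and without 1000 fully interconnected hub vertices.
n = 6_000_000_000      # people on the circle
reach_per_hop = 500    # 1000 neighbors means 500 in each direction

# Pure ring: worst case is a vertex halfway around the circle.
ring_hops = (n // 2) // reach_per_hop
print(ring_hops)       # 6_000_000 hops

# With 1000 evenly spaced hubs: walk to the nearest hub (at most
# half the 6-million-vertex spacing away), cross one hub-to-hub
# edge, then walk the same distance on the far side.
spacing = n // 1000
hub_hops = 2 * ((spacing // 2) // reach_per_hop) + 1
print(hub_hops)        # 12_001 hops in the worst case
```

Even this crude sketch shows how dramatically a handful of long-distance edges shortens paths: three orders of magnitude, before we even consider realistic numbers of world travelers.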

This leads to what mathematicians call a “small world” graph, where most vertices are connected to a large number of neighbors, but a few “hub” vertices have a lot of distant connections.   You can think of this like an airline routing map:   to get to a distant city, you first fly to a nearby hub, then make a long-distance flight from there to another hub, and finally from that hub you can reach your destination.   It’s probably easy for you to think of some “hub” people in your life.   In my case, while most people I talk to daily are from Oregon, my old friend Ruthe graduated from Oxford and worked until last year in the Obama White House— so through her, I probably have a very short path to numerous prominent politicians, for better or worse.   As we have seen, these hub connections drastically reduce the hops needed to connect any two vertices.   The addition of 1000 world travelers to the circular graph actually strikes me as a major underestimate of likely real-life long-distance connections, making the original idea of Six Degrees of Separation start to seem quite reasonable.

We should add, of course, that Milgram’s flawed experiments were not the final word on the topic.   There have been many similar efforts over the years, and this experiment has gotten easier as the world has become more interconnected in trackable ways, through the growth of the Internet.   Microsoft did an experiment in 2006 analyzing users of their Messenger network, and found an average of 6.6 hops to connect any two people, with a worst case number of 29.   So Milgram may have been right after all, despite the problems with his trials.   On the other hand, the number might be noticeably higher for the non-electronically-gifted portion of the world population— would that hypothetical Cambodian goat farmer be on the internet, or know someone who is?    Hopefully soon the internet will be bringing this podcast (as well as personal connections) to every human being on the planet, but we’re not quite there yet.

And this has been your math mutation for today.




References:






Sunday, June 25, 2017

231: A New Dimension in Housing



You may recall several earlier podcasts in which we discussed the works of Buckminster Fuller, the visionary futurist and architect of the mid-20th century.   He is best known for popularizing the geodesic dome, and has been memorialized in the name of the chemical “fullerene”, an arrangement of 60 carbon atoms with a similar structure.    He also was a popular, if somewhat eccentric, speaker on the idea of “Spaceship Earth”, offering various feel-good prescriptions for enabling the future survival of the human race.   A critical component of his philosophy was a new “synergetic” geometry based on 60-degree rather than 90-degree angles, which would surely lead to new ways of looking at the world.    But recently I’ve been reading a new biography of Fuller’s early years, “Becoming Bucky Fuller” by Loretta Lorance, and was surprised to learn that despite his own later descriptions of his early life, Fuller did not originally set out to be a futurist or visionary, or to save the world through a philosophical revolution.   Originally, he was simply trying to start a successful company, and hoping to follow in the footsteps of industrialists like Henry Ford.

Fuller’s first significant job in the 1920s, after leaving the army, was with his father-in-law marketing the “Stockade System”, a clever design of reusable wood-composite blocks for construction purposes.   The idea was to standardize construction on a precision-manufactured type of block, with holes in each block that could be lined up to pour in large quantities of concrete to add structural stability.    This could improve construction efficiency and reduce the cost of buildings.   This company eventually failed, but it gave Fuller a more ambitious idea.   Why not try to mass-produce full houses, rather than the component bricks?    He compared the concepts to Ford’s mass production of cars, in contrast with the cumbersome process of getting a house built.  “What would happen if a person, seeking to purchase an automobile, had to hire a designer, then send the plans out for bid, then show them to the bank, and then have them approved by the town council, all before work on the vehicle could begin?”   Cars would be far more expensive, and would only be affordable to the wealthiest citizens.   By mass-producing houses like cars, private homes would be within reach at much lower income levels.

Fuller attempted several versions of the design of this house, thinking of the practicalities of mass production.  However, his inclination to have it supported by a central, cylindrical metal frame supporting an overall circular or hexagonal design was very unusual for the time, and probably appeared to most people like something out of science fiction.   He also concentrated as much on the philosophy of his new houses as he did on the actual design, marketing it in a pamphlet called “4D Timelock”, implicitly linking his project with new scientific developments regarding the fourth dimension.   There were several reasons why Fuller considered his new designs to be four-dimensional.  First, there was the push for temporal efficiency of construction, to a greater degree than past architecture, incorporating the dimension of Time.   Then, there was supposed built-in longevity to his designs, due to the use of superior materials and techniques, again incorporating the idea of Time from the beginning.   Finally, he claimed that using advanced geometric concepts, like radiating spheres and trigonometry, was a key component that integrated all dimensions.   I find that last claim a bit odd, since those concepts are clearly part of three-dimensional geometry, but was unable to locate an online copy of Fuller’s original pamphlet to check the claim in more detail.

Fuller’s attempts to get investors to actually fund this new concept were, unfortunately, not very successful.   He initially made the mistake of trying to unveil it at a national architects’ convention— one which was dominated by a fear of mass-production and a movement among architects to make sure that every design remained custom and unique.    He then sent out many copies of his “4D Timelock” to potential investors, but while he received some fascinated replies, he got very little money.   Thinking the “4D” concept might be scaring some people off, he changed the marketing name to “Dymaxion”, combining “dynamism”, “maximum”, and “ions”.    His first big break came when he got permission to display a model at the Marshall Field department store in Chicago, where the public was intrigued by the bizarre design.   

Word-of-mouth led to other opportunities for him to display and talk about his ideas.  He exhibited and spoke about the Dymaxion house throughout the 1930s, as well as working on other related projects.   Along the way, he had to start describing his ideas as potential houses of the future— because despite his popularity, he had failed to attract enough investment to mass-manufacture the actual house anytime soon.    The model’s bizarre appearance, and the rhetoric that connected it with the fourth dimension, fit nicely with this futuristic framing.   Connected with this futurism was the appealing idea that these new houses could enable social progress:  by making housing less expensive and delivering it efficiently, a huge proportion of humanity could be lifted out of poverty and provided practical homes of their own.   As Lorance puts it, “As time progressed the issue changed from the specific house to the possibilities the house represented”.    

In other words, Fuller’s tours promoting the Dymaxion House launched his reputation as a futurist, visionary thinker, and his popularity as a public speaker.   This gave him the freedom and success to later explore many other radical ideas, such as his geodesic domes, which led to well-deserved worldwide fame.   Ironically, we would probably not remember him nearly as well today if his first proposals had actually succeeded in attracting investors, and he had simply become the founder and CEO of a practical manufactured-home provider.

And this has been your math mutation for today.


References:

Saturday, May 27, 2017

230: Just Say The Dog Ate It

Audio Link

In the last few podcasts, you may recall that we’ve been discussing the Collatz Conjecture, a famous unsolved problem that’s very easy to state.   Just take any positive integer, and repeatedly perform the following operation:  if it’s odd, triple it and add 1, but if it’s even, divide it by two.   The conjecture is:  with this process, will you always end up eventually back at 1?   While the conjecture has been tested and is true for a huge range of values, no mathematician has yet been able to prove that it will always be true.   Based on that, I was surprised at a result I got when googling references on this conjecture for my last podcast.   It was an article by a Kentucky professor named Benjamin Braun, in which he talked about using the Collatz Conjecture as a homework problem for an undergraduate math class.

Now, when hearing this, you may be reminded of the strange story of statistician George Dantzig, which we have discussed in a previous podcast.   When in college, Dantzig arrived late to class one day, and copied down some problems he saw on the board, assuming they were the day’s homework.   He thought the homework was harder than usual, but finished it in a couple of days, and handed it in.   Then he discovered that these had actually been famous unsolved problems, and because he thought they were homework, he had solved them— and the solutions later served as the basis for his Ph.D. thesis!   This story became a staple of “power of positive thinking” lectures throughout the 20th century.

So, does Braun hope there is another Dantzig lurking out there in his classes, who could solve the Collatz Conjecture if approaching it with the right attitude?   That would be nice, but that’s a longshot.   Actually, Braun believes that unsolved problems can have a significant educational value as homework.   Too many students share a misperception that math is all about using known formulas and procedures to get answers, even after completing many college math classes, and are never in a position to explore a truly interesting problem.    When challenged with the Collatz Conjecture, students need to experiment with various hypotheses without knowing which ones will be true, and then can proceed to develop and prove some interesting insights.   For example, maybe some students will write a program to graph the number of steps it takes for various numbers to get down to 1.   This might lead them to hypothesize and prove an intermediate theorem, such as a relationship between a number’s base-2 logarithm and the minimum number of Collatz steps to 1.   In the end, they probably won’t solve the famous conjecture, but they will learn a lot along the way.
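A student program of the kind described above might start with something like this minimal Python sketch, which computes the number of Collatz steps for each starting value (the data one would then graph when hunting for patterns):

```python
def collatz_steps(n):
    """Number of 3n+1 / halving steps needed to reach 1 from n."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# Stopping times for the first few positive integers.
print([collatz_steps(n) for n in range(1, 11)])
# [0, 1, 7, 2, 5, 8, 16, 3, 19, 6]
```

Plotting these values against n, or against log base 2 of n, is exactly the sort of open-ended experiment that could lead a student toward an intermediate hypothesis.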

Braun describes three main benefits of this style of assignment:
  1. “Students are forced to depart from the answer-getting mentality of mathematics.”   As we have just discussed, they probably won’t completely solve the problem.
  2. “Students are forced to redefine success in learning as making sense and increasing depth of understanding.”    Since they are free from the pressure to find a solution, they can relax and concentrate on what they are learning.
  3. “Students can work in a context where failure is normal.”    As they examine the problem, students may come up with various hypotheses, and it’s fine if some of them are wrong.  As Braun describes it, they will understand the “pervasive normality of small mistakes in the day-to-day lives of mathematicians and scientists”.

Naturally, you might wonder how students react to being given an unsolvable problem.   Apparently this varies a bit:  Braun mentions that while some students love the exercise, others are inclined to feel frustration and defeat.   But I think there is little doubt that this exercise is a rare one in an undergraduate curriculum.  It’s a great path for students to understand that math isn’t all about replicating known algorithms, formulas, and proofs, and that there is still a huge unknown mathematical universe out there to explore.


And this has been your math mutation for today.


References:



Sunday, March 26, 2017

229: When Numbers Change

Audio Link

In the last episode we discussed the Collatz Conjecture, a simple yet unsolved problem in elementary number theory.   During the podcast, I mentioned that it had been checked for specific values up to around 2 to the 60th power, and no counterexample has been found.   Now, for common day-to-day purposes, you might say we can take the conjecture as true, since we’re unlikely to deal in real life with any number that large, unless perhaps we are doing advanced work in physics or chemistry.    Most likely I won’t be filling my car with 2 to the 61st gallons of gasoline and needing to rely on its obscure mathematical properties.   But beyond that, you might wonder if it’s possible at all for the conjecture to be false— after all, why would regular numbers start behaving differently once we get above a certain threshold?  We need to be careful though.   In fact, there are various mathematical conjectures that have been shown to be true up to some immensely large bound, but then suddenly fail.   Today we’ll discuss a few examples.   

Actually, if you think about it a bit, it’s not too hard to construct artificial cases of a conjecture that’s true up to some large number, then fails afterwards.   Here’s an easy one:  “All numbers are less than 10 to the 60th power.”   I dare you, try any number less than 10 to the 60th, and you’ll see this theorem seems miraculously true!  But just try a single larger number, and it will fail.     OK, you may consider that one to be cheating, as of course it’s possible to do this by mentioning a specific limit.   But here’s another artificial case that doesn’t mention a specific limit:   For any number N, that number does not represent the ASCII-encoded text of a Math Mutation podcast.   You may recall that modern computers represent any text document as a long series of numbers, which could be mashed together to represent one large number.   If the shortest episode of this podcast has, say, 500 characters in it, with each character represented by 8 binary digits, then the smallest counterexample is around 2 to the 4000th power.     Try any smaller number, and it will obey the theorem.   We should point out that even if the artificial nature of these two examples makes them unsatisfying, they are still valid as existence proofs, showing that it is possible for a theorem to hold true for a huge range of numbers and then fail.
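The text-as-a-number trick is easy to demonstrate concretely. Here's a small Python sketch, using a hypothetical one-line "episode" rather than a real transcript, showing that any string maps to a unique large integer and back again:

```python
# Encode a text as one large number by mashing its byte values together,
# as described above.  The sample text here is just an illustrative line.
text = "And this has been your math mutation for today."
n = int.from_bytes(text.encode("ascii"), "big")

print(n.bit_length())  # a few hundred bits for this short line

# The mapping is reversible, so the integer genuinely "is" the text:
recovered = n.to_bytes((n.bit_length() + 7) // 8, "big").decode("ascii")
print(recovered == text)  # True
```

A full 500-character episode would encode 4000 bits this way, giving a number near 2 to the 4000th power, in line with the estimate above.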

The world of mathematics, however, provides no shortage of “real” conjectures, some created by brilliant mathematicians who just happened to interpolate incorrectly on a handful of topics.   Probably the most famous is one by Pierre de Fermat, the creator of Fermat’s Last Theorem.   I’m sure you remember this Theorem, where in the 1600s, Fermat conjectured that there are no whole number solutions to a^n + b^n = c^n for n greater than two.   That one turned out to be true for all possible numbers, as proven by Andrew Wiles in the 1990s.   But Fermat also came up with many other conjectures.   (He actually had a somewhat pretentious habit of writing down his conjectures along with comments that he had a proof, but not writing down the proof; that’s probably a story for another podcast though.)   Anyway, Fermat examined a set of numbers of the form 2^(2^n) + 1, which came to be known as Fermat numbers.   The first five of these are 3, 5, 17, 257, and 65537, which he noticed are all prime.   So he hypothesized that all Fermat Numbers are prime.  It seemed like a pretty good guess, though due to the rapid exponential growth it was hard to check for too many values.   But within 70 years after Fermat’s death, Leonhard Euler found a factorization for the 5th Fermat number, which is up in the 4-billions range, showing that it was not prime.  This was actually a pretty impressive accomplishment in a pre-computer age.
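Today, what took Euler real ingenuity is a one-liner. This Python sketch generates the Fermat numbers and checks Euler's famous factor of the fifth one:

```python
# Fermat numbers F_n = 2^(2^n) + 1.  The first five are prime,
# but Euler found a factor of F_5.
fermat = [2**(2**n) + 1 for n in range(6)]

print(fermat[:5])   # [3, 5, 17, 257, 65537] -- all prime
f5 = fermat[5]
print(f5)           # 4294967297 -- "up in the 4-billions range"
print(f5 % 641)     # 0: Euler's factor divides it evenly
print(f5 // 641)    # 6700417, the cofactor
```

So the conjecture survives exactly five data points before collapsing, a nice cautionary tale about extrapolating from small samples.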

Fermat got his posthumous revenge though.   One of the longest-standing serious conjectures that has ended up being disproved was an analog of Fermat’s Last Theorem that Euler proposed in 1769, Euler’s sum-of-powers conjecture.   This basically set up a family of equations related to Fermat’s famous theorem, which Euler thought would all be unsolvable with whole numbers.  One example is a^5 + b^5 + c^5 + d^5 = e^5.    This actually stood for nearly two centuries, until finally in 1966 a brute-force search with modern computer technology enabled L. J. Lander and T. R. Parkin to find a solution:  27^5 + 84^5 + 110^5 + 133^5 = 144^5.      Those numbers may not seem that large, but remember that when combining five 3-digit numbers, you have a truly immense number of possibilities, 10 to the 15th power, again virtually unsearchable in a pre-computer era.
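The Lander-Parkin counterexample is another result that any modern computer can confirm instantly:

```python
# Verifying the 1966 Lander-Parkin counterexample to
# Euler's sum-of-powers conjecture.
lhs = 27**5 + 84**5 + 110**5 + 133**5
rhs = 144**5

print(lhs == rhs)   # True
print(rhs)          # 61917364224
```

Of course, verifying one known solution takes microseconds; it was searching the roughly 10^15 candidate combinations that required 1960s computing muscle.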

Anyway, in the show notes you can find links to various other cases where a conjecture was thought to be true, and held true for millions of examples or beyond— but then in the end, was discovered to be incorrect.     We shouldn’t feel bad about these:  that’s how all math and science advances, by trying to interpolate new truths about the universe from what has been discovered so far.  Sometimes we are right, and sometimes even the best of us are wrong.   So even though we’ve tested the Collatz Conjecture for a massive range of possible values, there still could be a hidden counterexample lurking somewhere in the far reaches of exponential values, waiting to catch us by surprise a few centuries down the road.

And this has been your math mutation for today.


References:



Monday, February 20, 2017

228: So Easy It's Hard

Audio Link

Let’s try an experiment.  Think of a positive whole number.   Any number will do.   Now, follow this simple rule:  if the number is even, divide it by two.   If it’s odd, multiply by 3 and add 1.   Repeat this process until your resulting number is 1.    So, for example, suppose we start with 5.   We multiply by 3 and add 1, to get 16.   Then, following the same rule, we divide by 2 to get 8.  Then we divide by 2 to get 4, and divide by 2 again to get 2, then 1.   If you try this with a few numbers, you’ll see that although you may go up and down a few times, you always seem to end up at 1.  But are you always guaranteed to arrive at 1, no matter what number you started with?   
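The walkthrough above is easy to reproduce with a short Python sketch, if you'd rather let a computer do the arithmetic while you pick starting numbers:

```python
def collatz_sequence(n):
    """The 3n+1 / halving trajectory from n down to 1."""
    seq = [n]
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2  # odd: triple and add 1; even: halve
        seq.append(n)
    return seq

print(collatz_sequence(5))  # [5, 16, 8, 4, 2, 1], matching the example above
```

Try a few starting values and watch how wildly the trajectory lengths vary.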

Believe it or not, this simple question has not been solved.  It’s a famous open problem of mathematics, known as the Collatz Conjecture, or the “3n+1 problem”.     If we define the stopping time as the number of steps to get to 1, this conjecture can be stated as follows:  all positive whole numbers have a finite Collatz stopping time.   Despite being simple enough to explain to an elementary school student, this problem has defied the efforts of mathematicians and hobbyists for nearly a century.  The late quirky mathematician Paul Erdos once offered a $500 bounty for anyone who solves this problem, but this vast fortune has not yet been claimed.

By experimenting manually with a few numbers, you can easily convince yourself that the conjecture is true— it seems like you really do always end up back at 1, no matter how you started.   Yet your path to get there can vary wildly.   If you start with a power of 2, you can see that you’ll dive straight back to 1.   Some well-positioned odd numbers are almost as easy:  for example, if you start with 85, you’ll then jump to 256, which is a power of 2, and head straight back from there to 1.  On the other hand, if you start with the seemingly innocent number of 27, you will find the total stopping time is 111 steps, during which you visit numbers as high as 9232.  The Wikipedia page has some nice graphs showing how the stopping time varies:  its maximum value seems to slightly increase as the starting numbers increase, but there is no simple pattern that can be established to prove the conjecture.    Computers have experimentally shown that the conjecture holds for numbers up to 2^60, but of course that does not prove that it will remain true forever.

This Collatz stopping time function can also be seen as an example of chaos, a case where a very slight change in initial conditions can cause a dramatic difference in the result.   Why is it that starting with 26 will enable you to finish in a mere 10 steps, while increasing to 27 takes 111 steps, and then many higher numbers have far fewer steps?   It’s a good example to keep in mind when someone claims they have made accurate predictions about some iterative physical system using computer models.  Can they make the case that their model is somehow simpler than the Collatz process, of either halving or tripling and incrementing a single number at a time?   If not, what makes them think their modeling is less chaotic than the Collatz problem, or that their initial conditions are so accurate that they have ruled out chaos effects?     

As with many unsolved problems, this problem is also attractive to many slightly self-deluded amateurs, who every few years publish an article or make an online post claiming to have proven it.   One simple way to test any proposed proof is to see if it also applies to very similar problems for which the conjecture is false.  For example, instead of multiplying odd numbers by 3, multiply them by 5 before adding 1, turning the question into the “5n+1 problem”.   In that case, if we start with the number 13, the sequence is 13, 66, 33, 166, 83, 416, 208, 104, 52, 26, 13.   Since we got back to the number we started with, this means we repeat forever, without ever getting back to 1!   Thus, any attempted proof of the Collatz 3n+1 problem would have to also have some built-in reason for why it doesn’t apply to the 5n+1 version.    Why is the number 3 so special compared to 5?   Well, if I could answer that, I would be riding away in a limo paid for by Erdos’s $500.   
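The 5n+1 cycle described above is a handy sanity check to run against any proposed proof, and it's simple to reproduce. Here's a Python sketch that follows the trajectory until it either reaches 1 or revisits a number it has already seen:

```python
def trajectory_until_repeat(n, mult=5):
    """Follow the 'mult*n+1 or halve' process until we hit 1 or loop."""
    seen, seq = set(), []
    while n != 1 and n not in seen:
        seen.add(n)
        seq.append(n)
        n = mult * n + 1 if n % 2 else n // 2
    seq.append(n)  # record the repeated value (or the final 1)
    return seq

print(trajectory_until_repeat(13))
# [13, 66, 33, 166, 83, 416, 208, 104, 52, 26, 13] -- a cycle, never reaching 1
```

Since the sequence starts and ends at 13, the 5n+1 process loops forever, which is exactly why a valid Collatz proof must somehow depend on the multiplier being 3.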

Still, even if you don’t expect to solve it, it is kind of fun to play with example values and look for patterns.   The way that this simple formula can seem to cause numbers to wander away from you, circle around and tease you temptingly, or race straight down to 1, can seem almost lifelike at times.   An intriguing online abstract claims to describe the problem as “an ecological process of competing organisms”, made of 1s in bit strings.  (Sadly, the full paper for that one is hidden behind a paywall, so I wasn’t able to read it.)     But I think my favorite summary of the problem is the one in the XKCD web comic:   “The Collatz Conjecture states that if you pick a number, and if it’s even divide it by two and if it’s odd multiply by three and add one, and you repeat this procedure long enough, eventually your friends will stop calling to see if you want to hang out.”

And this has been your math mutation for today.




References:




Sunday, January 29, 2017

227: Heads In The Clouds

Audio Link

A few days ago I was flipping channels on the TV, and saw a few minutes of one of the horrible movie adaptations of Jonathan Swift’s classic 1726 satirical novel, Gulliver’s Travels.  When you hear that title, you probably think of a man being tied up on the beach and being captured by an army of tiny Liiluputians.   Most adaptations of the novel focus on that nation of tiny people, which actually comprised only the first section of the Travels.   Although mainly a political satire, even that part of the book had an influence on modern mathematics and computer science—  the Lilliputians were fighting a war over which end of an egg to crack first, with the Big Endians vs the Little Endians.   We now use those terms when describing whether the highest or lowest byte comes first in each multi-byte ‘word’ that forms a computer memory.   But that’s not our main topic today.   What I want to talk about is one of Gulliver’s later voyages, which directly satirized the mathematics and science of Swift’s day:    the voyage to Laputa.
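For the curious, the big-endian versus little-endian distinction is easy to see in a couple of lines of Python, using the standard library's struct module to lay out the same 32-bit word both ways:

```python
import struct

# The 32-bit integer 1, packed with the "big end" (most significant
# byte) first, versus the "little end" first.
print(struct.pack(">I", 1).hex())  # 00000001
print(struct.pack("<I", 1).hex())  # 01000000
```

Same number, same four bytes, opposite order: a dispute every bit as consequential in networking protocols as egg-cracking was in Lilliput.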

Laputa was a city built on a floating island, populated by a highly educated race of men who spent all day contemplating advanced ideas of music, mathematics and science, almost completely disconnected from any practical matters.   These were a group of people who literally had their “heads in the clouds”.   According to at least one website, this metaphor was already in use by the 1600s, so Swift may have had it in mind when designing this city.  In fact, Laputans are always so deep in thought that they must hire an assistant to alert them when they need to interact with the real world.   As Swift described it:  

It seems the minds of these people are so taken up with intense speculations, that they neither can speak, nor attend to the discourses of others, without being roused by some external taction upon the organs of speech and hearing; for which reason, those persons who are able to afford it always keep a flapper … in their family, as one of their domestics; nor ever walk abroad, or make visits, without him.  And the business of this officer is, when two, three, or more persons are in company, gently to strike … the mouth of him who is to speak, and the right ear of him or them to whom the speaker addresses himself.  This flapper is likewise employed diligently to attend his master in his walks, and upon occasion to give him a soft flap on his eyes; because he is always so wrapped up in cogitation, that he is in manifest danger of falling down every precipice, and bouncing his head against every post; and in the streets, of justling others, or being justled himself into the kennel.   

Somehow all this contemplation never translates in practice to real world usefulness.  Gulliver admires the elaborate care with which their tailor measures every detail of his body, but the clothes then delivered are ill-fitting.  Their houses are all poorly put together, with walls at odd angles, because their precise geometric instructions are too refined for the uneducated servants who end up having to do the actual building.    They serve their food cut carefully into various geometrical shapes or representing musical instruments, with no relevance towards whether that is an appropriate or useful presentation for actual consumption.   He summarizes the situation with “I have not seen a more clumsy, awkward, and unhandy people, nor so slow and perplexed in their conceptions upon all other subjects, except those of mathematics and music.” 

A question you might now be asking is:  why does Swift seem so hostile to advanced science and mathematics, which by our time have resulted in amazing improvements to human comfort, productivity, and lifespan?   We need to keep in mind that back in the early 1700s, it was not at all obvious that all the effort spent by the elites on pursuing advanced studies of mathematics and science were actually leading anywhere.   One of the few practical abilities the Laputans did have was the power to lower their floating city and crush rebellious townspeople on the ground, perhaps a hint at the worry that new science was too often used to develop instruments of war rather than advance humanity.  Here is one of Swift’s comments on the unfulfilled promises of scientific leaders:  “ All the fruits of the earth shall come to maturity at whatever season we think fit to choose, and increase a hundred fold more than they do at present; with innumerable other happy proposals.  The only inconvenience is, that none of these projects are yet brought to perfection; and in the mean time, the whole country lies miserably waste, the houses in ruins, and the people without food or clothes. “   In fact, these promises were largely fulfilled by the sciences in the 20th century— too bad Swift never lived to meet Norman Borlaug and see the massive agricultural productivity increases of his Green Revolution.

Swift was also particularly sensitive to the suspicious claim that scientific understanding of the mathematical laws governing the natural world would somehow enable a corresponding scientific and mathematical reorganization of society to benefit mankind.   He’s actually pretty explicit about this, stepping back from the satire at one point to address the real world directly:  

But what I chiefly admired, and thought altogether unaccountable, was the strong disposition I observed in them towards news and politics, perpetually inquiring into public affairs, giving their judgments in matters of state, and passionately disputing every inch of a party opinion.  I have indeed observed the same disposition among most of the mathematicians I have known in Europe, although I could never discover the least analogy between the two sciences; unless those people suppose, that because the smallest circle has as many degrees as the largest, therefore the regulation and management of the world require no more abilities than the handling and turning of a globe; 

Here I think Swift was on to something, when we consider that the major mass-murdering totalitarian movements of the 20th century all had intellectuals at their core who believed they needed to scientifically re-engineer society.    On the other hand, since I’m actually an engineer who now serves in elected political office, I should probably stop the podcast at this point before getting myself into trouble.   

If you’re a fellow math geek and haven’t read Gulliver’s voyage to Laputa, I think you’ll really enjoy it.   Since it’s so old it’s out of copyright, you can follow a link in the show notes and read it for free at Project Gutenberg.
  
And this has been your math mutation for today.







Thursday, December 29, 2016

226: See You Next Year

Audio Link

Before we start, I’d like to thank listener Maurizio Codogno, who published a nice review of the Math Mutation book at goodreads.com.   Bizarrely enough, he wrote his review in Italian, but thanks to the magic of Google Translate, that doesn’t stop other listeners from reading it!   Remember that if you like the podcast and/or book, I really do appreciate a nice review at iTunes, Amazon, or Goodreads.

Now, on to today’s topic.   Recently I’ve been reading “A Year From Monday”, a collection of essays, short biographical recollections, and text-based art experiments by the radical 20th-century composer John Cage.    You may recall Cage as someone I’ve mentioned in several podcasts,  as he composed (if you can call it that) the silent music piece “4 minutes 33 seconds”, plus numerous musical pieces generated from complicated formulas involving random numbers.   As usual with Cage, his entries in this book are sprinkled with many instances of weirdness for the sake of weirdness, woven in with a bit of celebrity name-dropping.   But it’s worth reading for the bizarre humor and occasional surprising insight.   

Cage also used the essays in the book as lyrics for one of his strangest music pieces, “Indeterminacy”.   In Indeterminacy, he read each short story at a speed designed to fill a constant interval, while randomly-determined music played in the background.   Due to the timing needs of the music, pieces would sometimes be read very quickly, to fit in a lot of words, or very slowly to fill the available time.   There also was a random ordering to the stories.   As Cage described it, “My intention in putting the stories together in an unplanned way was to suggest that all things – stories, incidental sounds from the environment, and, by extension, beings – are related, and that this complexity is more evident when it is not oversimplified by an idea of relationship in one person’s mind.”   I actually bought the 2-CD set a number of years back, but found listening to it a rather frustrating experience, as you can probably guess.   As often happens with Cage, the idea of the piece is a lot more fun than the actual end result.

Getting back to the book, one of the aspects I find most amusing is the conundrum represented in its title, “A Year From Monday”.   Apparently Cage was having fun with a group of old friends, and they decided they wanted to get together again.  One of them suggested that they would all meet at a favorite spot in Mexico “a year from Monday”, and they all agreed to the proposal, without further clarification.   Cage liked the idea, as it appealed to, as he described it, “my interests in ambiguity and my interest in non-measurement”.  After leaving, however, Cage started wondering:  when exactly had they agreed to meet?

As you’ve probably already figured out, the phrase “a year from Monday” is rather ambiguous.   How do we define such an interval?   The easiest method would be to assume they simply meant the same date next year:  if we assume the discussion occurred on Monday, June 2nd, for example, then they would meet next year on June 2nd.   But that will not be a Monday, since 365 days is 52 weeks plus one day, so the same date falls one weekday later the following year.   Is this a problem?   Normally, when we talk about an interval starting on a given day of the week, we expect to meet on that same day again:  for example, when scheduling a monthly meeting in a tool like Microsoft Outlook, we usually select options like “the first Monday of every month” or “the second Monday of every month”.   So perhaps it would be more reasonable to assume that the plan actually meant to meet on the first Monday of June next year.
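To make the ambiguity concrete, here’s a quick sketch in Python’s datetime module.   The specific starting date is just an illustrative assumption, since Cage never tells us the year; I’ve picked Monday, June 2nd, 2014.   It checks where “the same date next year” lands, and computes the Outlook-style reading of “the first Monday of June next year”:

```python
from datetime import date, timedelta

def first_monday_of_june(year):
    """Return the first Monday of June in the given year."""
    d = date(year, 6, 1)
    # weekday(): Monday == 0, so advance to the next Monday if needed
    offset = (0 - d.weekday()) % 7
    return d + timedelta(days=offset)

# Hypothetical starting point: Monday, June 2nd, 2014.
start = date(2014, 6, 2)
assert start.weekday() == 0  # confirm it really is a Monday

# "Same date next year" lands one weekday later, since 365 % 7 == 1.
same_date = date(start.year + 1, start.month, start.day)
print(same_date.strftime("%A"))           # Tuesday

# The Outlook-style reading: first Monday of June next year.
print(first_monday_of_june(start.year + 1))  # 2015-06-01
```

Note that the two readings give different days: June 2nd, 2015 is a Tuesday, while the “first Monday” interpretation lands a day earlier, on June 1st.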

There is also the question of how to handle the possible case of a leap year.   If the intervening February had an extra day, would they have to meet a day later than originally planned, more like a year from Tuesday?   On the other hand, maybe our slavish devotion to human-created calendars is part of the problem.   If an objectively-measured solar year was intended, this is about 365.25 days, so it might make more sense to meet 365 days later, but delay the meeting time by 6 hours in order to make the interval precisely one year.
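The calendar-date and solar-year readings can also be compared in a few lines of Python.   Again, the 2014 date and the noon agreement time are purely illustrative assumptions:

```python
from datetime import datetime, timedelta

# Hypothetical: the meeting was agreed at noon on Monday, June 2nd, 2014.
agreed = datetime(2014, 6, 2, 12, 0)

# Interpretation 1: the same calendar date next year.
calendar_year = agreed.replace(year=agreed.year + 1)

# Interpretation 2: one mean solar year (~365.25 days) later,
# i.e. 365 days plus 6 hours.
solar_year = agreed + timedelta(days=365, hours=6)

print(calendar_year)  # 2015-06-02 12:00:00
print(solar_year)     # 2015-06-02 18:00:00
```

Since no February 29th falls in this particular interval, the two interpretations land on the same date, differing only by the six-hour shift; starting in a year just before a leap day, they would diverge by a full day.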

While Cage claimed that he enjoyed the ambiguity, he also had a conflicting tendency to carefully plan and measure musical pieces like an engineer, as shown by the precise time intervals used in Indeterminacy.    Thus, he eventually realized that he had to figure out exactly when he was going to Mexico.   When he tried to confirm his plans with the other attendees, he soon realized that due to the ambiguous phrasing, most of them had not actually taken the rendezvous seriously.  In fact, some had made firm plans that would prevent them from meeting in Mexico anywhere near the proposed date.   With that, Cage decided to give up his attempts at planning and simply rely on Fate, accepting that some problems don’t have a clear answer.  As he phrased it, “We don’t have to make plans to be together.  (Last July, Merce Cunningham and I ran into Bucky Fuller in the airport outside of Madrid.)  Circumstances do it for us.” 
   
And this has been your math mutation for today.


