## Tuesday, November 29, 2016

### 225: A Crazy Kind of Computer

Before we start, just a quick reminder:  the Math Mutation book would make a perfect holiday present for the math geeks in your life!   You can find it on Amazon, or follow the link at mathmutation.com .   And if you do have the book, we could use a few more good reviews on Amazon.   Thanks!   And now, on to today’s topic.

Recently I learned about a cool trick that can enable very rapid computation of seemingly difficult mathematical operations.  This method, known as stochastic computing, makes clever use of the laws of probability to dramatically cut down the amount of logic, essentially the number of transistors, needed to compute elementary functions.   In particular, a multiplication operation, which takes hundreds or thousands of transistors on a conventional computer, can be performed by a stochastic computer with a single AND gate.   Today we’ll look at how such an amazing simplification of modern computing tasks is possible.

First, let’s review the basic elements of standard computation, as realized in the typical design of a modern computer.   At a logical level, a computer is essentially built out of millions of instances of three simple gate types, each of which takes one or two single-bit inputs, which can be either 0 or 1.  These are known as AND, OR, and NOT gates.   An AND gate returns a 1 if both its inputs are 1, an OR gate returns a 1 if at least one of its inputs is 1, and a NOT gate transforms a single bit from 0 to 1 or vice versa.   An operation like multiplication would occur by representing your pair of numbers in binary, as a set of 0s and 1s, and running the bits through many AND, OR, and NOT gates, more or less replicating the kind of long multiplication you did in elementary school, with a few optimizations thrown in.    That’s why it takes so many gates to implement the multiplier in a conventional computer.
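To make the gate-level picture concrete, here is a minimal sketch in Python of how a conventional multiplier is built up; this is an illustrative software model, not real hardware, and all the helper names are my own.  Everything below reduces to the three basic gates, and even this toy 3-bit multiplier ends up invoking dozens of them.

```python
# Illustrative model of conventional gate-based multiplication:
# everything below is composed from AND, OR, and NOT.

def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def XOR(a, b):  # composed from the three basic gates
    return AND(OR(a, b), NOT(AND(a, b)))

def full_adder(a, b, carry_in):
    s = XOR(a, b)
    return XOR(s, carry_in), OR(AND(a, b), AND(s, carry_in))

def multiply(x_bits, y_bits):
    """Shift-and-add multiplication of two little-endian bit lists."""
    result = [0] * (len(x_bits) + len(y_bits))
    for i, y in enumerate(y_bits):
        carry = 0
        for j, x in enumerate(x_bits):
            partial = AND(x, y)  # one AND gate per bit pair
            result[i + j], carry = full_adder(result[i + j], partial, carry)
        k = i + len(x_bits)
        while carry:  # ripple the final carry upward
            result[k], carry = full_adder(result[k], 0, carry)
            k += 1
    return result

def to_bits(n, w):
    return [(n >> i) & 1 for i in range(w)]

def from_bits(bits):
    return sum(b << i for i, b in enumerate(bits))

print(from_bits(multiply(to_bits(6, 3), to_bits(5, 3))))  # 30
```

Scaling this scheme up to the 32- or 64-bit multipliers in a real processor is what drives the gate count into the thousands.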

So, with this being said, how can we reduce the multiplication operation down to a single AND gate?   The key is that instead of representing our numbers in binary, we send a stream of bits down each input to the gate, such that at any moment the probability of that input being 1 is equal to one of the numbers we are multiplying.   So for example, if we want to multiply 0.8 times 0.4, we would send a stream where 1s appear with 80% probability down the first input, and one with 40% 1s down the second input.   The output of the AND gate would be a stream whose proportion of 1s is 0.8 times 0.4, or 0.32.
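This scheme is easy to check in simulation.  The sketch below is a hypothetical software model (real stochastic hardware would generate its random streams in circuitry, not with a library call):

```python
import random

def stochastic_stream(p, n, rng):
    """A bit stream where each bit is 1 with probability p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def stochastic_multiply(p, q, n=100_000, seed=0):
    rng = random.Random(seed)
    a = stochastic_stream(p, n, rng)
    b = stochastic_stream(q, n, rng)
    out = [x & y for x, y in zip(a, b)]  # the single AND gate
    return sum(out) / n                  # decode: fraction of 1s

print(stochastic_multiply(0.8, 0.4))  # close to 0.32
```

With a hundred thousand bits per stream, the decoded fraction of 1s lands within a fraction of a percent of the true product.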

The reason this works is due to a basic law of probability:   the probability of two independent events both happening is equal to the product of their probabilities.   For example, if I tell you there is an 80% (or 0.8) chance I will record a good podcast next year, and a 40% (or 0.4) chance I will record a bad podcast next year, the chance I will record both a good and bad podcast is 0.8 times 0.4, or 0.32.  Thus there is a 32% chance I will do both.    The fact that this ANDing of two probabilistic events is equivalent to multiplying the probabilities is the key to making a stochastic computer work.

Now if you’re listening carefully, you may have noticed a few holes in my description of this miraculous form of computation.   You may recall from elementary probability that there is one major limitation to this probability-multiplying trick:  the two probabilities must be *independent*.  So suppose you want your stochastic computer to find the square of 0.8.   Can you just connect your 80%-probability wire to both inputs of the AND gate, and expect an output result that is 0.8 times 0.8 (or 0.64) ones?   No— the output will actually just replicate the input value of 0.8, since at any given time, it will be 1 if and only if the input stream had a value of 1.   Think about my real-life example again:  if I tell you there’s an 80% chance I’ll record a good podcast next year, what’s the chance I’ll record a good podcast AND I’ll record a good podcast?   Stating it redundantly doesn’t change the probability, it’s still 80%.   To compute a square operation in a stochastic computer, I need to ensure I have two *independent* bit streams that each represent the number I’m squaring.   So I need two separately-generated streams, each with that 80% probability, and can’t take the shortcut of connecting the same input twice.   If performing a series of computations in a stochastic computer, a designer needs to be very careful to take correlations into account at each stage.
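The correlation problem shows up immediately in simulation.  Again, this is a hypothetical software sketch, with names of my own choosing:

```python
import random

def stream(p, n, rng):
    """A bit stream where each bit is 1 with probability p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

rng = random.Random(1)
n = 100_000
a = stream(0.8, n, rng)

# Wiring one stream to both AND inputs just reproduces the input...
same = sum(x & x for x in a) / n                # close to 0.80, not 0.64

# ...while two independently generated streams give the true square.
b = stream(0.8, n, rng)
indep = sum(x & y for x, y in zip(a, b)) / n    # close to 0.64

print(same, indep)
```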

To look at another challenge of this computing style, let’s ask another basic question:  why doesn’t everyone implement their computers this way?  It’s not a question of technology— stochastic computing was first suggested in the 1950s, and such devices were actually constructed starting in the late 1960s.   Most importantly, generating independent bit streams with the probabilities equal to all the inputs of your computing problem, and then decoding the resulting output stream to find the probability of a 1 in your results, are both very complex computing tasks in themselves.   So I kind of cheated when I said the multiplication was done solely with the single AND gate.   Processing the inputs and outputs requires major designs with thousands (or more) of logic gates, quickly wiping out the benefit of the simplified multiplication.    This isn’t a total barrier to the method though.   The key is to find cases where we are able to do a lot of these probabilistic operations in relation to the number of input bit streams we need to create.    This can even be an advantage in some ways:  unlike a conventional computation, if a stochastic computer spends extra time performing the same computation and observing the output, it can increase the precision of the result.   But these issues do mean that finding good applications for the method can be rather tricky, as complex input and output generation must be balanced against the major simplification of certain operations.
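That last point, trading extra time for extra precision, is also easy to illustrate: the statistical error of the decoded result shrinks roughly like one over the square root of the stream length.  A quick sketch, with the same software-model caveats as above:

```python
import random

def estimate_product(p, q, n, rng):
    """Decode a length-n stochastic multiply: fraction of 1s out of the AND."""
    ones = sum((rng.random() < p) & (rng.random() < q) for _ in range(n))
    return ones / n

rng = random.Random(42)
for n in (100, 10_000, 1_000_000):
    err = abs(estimate_product(0.8, 0.4, n, rng) - 0.32)
    print(f"{n:>9} bits: error {err:.4f}")
# Each 100x longer stream buys roughly one more decimal digit of precision.
```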

Although stochastic computers were first constructed almost a half-century ago, they were then mostly abandoned for several decades.   With the breakneck pace of advances in conventional computing, there just wasn’t that much interest in exploring such exotic methods.    But with the recent slowdown of Moore’s Law, the traditional doubling of computer performance every 18 months or so, there has been a renewed interest in alternate computing models.   Certain specific applications, such as low-density parity check codes and image processing, are well-suited to the stochastic style of computing, and are current subjects of ongoing research.    I’m sure that in coming years, we’ll hear about more clever solutions to common problems that can be better performed by this unusual computing method.

And this has been your math mutation for today.

References:

## Monday, October 31, 2016

### 224: Did America Fail a Math Test?

Well, we’ve made it to another election season here in the US.   Among other things, it means that we’re once again hearing from politicians all over the spectrum about how they will fix American education, if we only vote for them.   We also start hearing all the stories that show how we are currently failing at education.   One of the more amusing ones that has been making the rounds again is the story of the ill-fated Third Pounder hamburger.   Supposedly a competing restaurant chain introduced a Third Pounder, which was designed to beat McDonald’s famous Quarter Pounder, but it failed in the marketplace.    The root cause of the failure was apparently that the average American didn’t understand that a third is greater than a quarter, and thought they were being ripped off.    This story has a bit of the ring of an urban legend to it though— don’t most people successfully cut up pizzas and follow kitchen recipes that require us to have at least this basic knowledge of fractions?   So I decided to do a little browsing on the web and see if I could get the real story.

Although it sounds suspicious, an article from Mother Jones, a generally well-researched periodical, seems to lend credibility to the legend.   The Third Pounder was introduced in the 1980s by fast food chain A&W, one of the many lesser competitors to the great McD.   It did indeed fail in the marketplace, and according to the Mother Jones article, they tracked down a statement by a former company owner, Alfred Taubman, about what happened.   Here’s the quote they got:

> Well, it turned out that customers preferred the taste of our fresh beef over traditional fast-food hockey pucks. Hands down, we had a better product. But there was a serious problem. More than half of the participants in the Yankelovich focus groups questioned the price of our burger. "Why," they asked, "should we pay the same amount for a third of a pound of meat as we do for a quarter-pound of meat at McDonald's? You're overcharging us." Honestly. People thought a third of a pound was less than a quarter of a pound. After all, three is less than four!

So, does this definitively prove that we are living in a nation of morons who think that 1/3 is less than 1/4?   Not so fast.   For one thing, this is not a summary of actual data at the time, but a recollection from years later.   We all know how those can be colored subconsciously by rumors and personal inclinations as time passes.    Like most prominent businessmen, Taubman wanted to believe that he did everything right, and it was the cruel universe that denied him his earned victory.   Perhaps one focus group participant made such a comment, and it stewed in his mind for years after.   Personally, I’ve always preferred McDonald’s burgers over A&W, regardless of what the A&W CEO thinks about their inherent superiority over “traditional fast-food hockey pucks.”

A thread at snopes.com brings up a few more interesting arguments.   Remember that the value provided is the pre-cooked weight of the burger.   Depending on the grinding process, ground beef can vary widely in content and quality.   Fat and water are lost during cooking, so the comparative post-cooking weights of the quarter and third pounders from different chains cannot be taken for granted.    Also, there may be other binding ingredients in the patty— so even if the weights are comparable, one may have more actual beef than the other.    And we can’t forget one other factor, the fact that it often seems more natural to deal in quarters than thirds; three is an odd number, harder to subdivide and work with in many contexts.   So the term “quarter pounder” may just trigger more comfortable feelings when you read it on a menu, for reasons you don’t consciously consider.

So, is our nation really so ignorant of basic fractions that we reject 1/3-pound burgers for being smaller than quarter pounders?    I think the jury is still out.  McDonald’s has actually introduced several 1/3-pound specialty burgers in recent years, but it’s hard to separate their performance from the general 21st-century decline in our taste for fast food.   A site called adventuresinfrugal.com implicitly proposes an interesting experiment:  someone should introduce both third pounder and fifth pounder burgers at the same price, and see which sells better.    Perhaps that would finally tell us whether or not our nation is truly confused about basic fractions.

And this has been your math mutation for today.

References:

## Friday, September 30, 2016

### 223: Think With Both Your Brains

One of the most basic questions in mathematics is:  how do you solve problems in general?    This is traditionally why students tremble in terror at “story problems”— instead of being asked to mimic well-known algorithms, as in the majority of their school exercises, suddenly they are in a situation where they are not presented the clear path to the answer.    Yet problem solving is one of the most critical skills you can learn in mathematics classes, and many of us, especially those in science and engineering fields, spend a lifetime continuing to sharpen our skills in this area.    Even in non-math-based professions, people often encounter dilemmas where the solution is not obvious.   So I think it’s worth taking a look at ways to improve our problem solving abilities in general.   And surprisingly, modern neuroscience can provide us some strange methods to try when simple linear reasoning fails us.

Probably the most famous book on this topic is “How To Solve It” by the late Stanford math professor Georg Polya.   Polya lays out a general 4-step process for approaching any problem, in a book full of useful examples from basic areas of algebra and geometry.   First, you need to understand it:  what is the given information, and what are the unknowns, the goals, and the restrictions that apply?  Second, find a way to connect the data and the unknowns, in order to plan your approach.   If this is not obvious, look for related problems, or a smaller subset of the problem that you can solve.   Third, carry out your plan, taking care to show that each step is correct.    And finally, examine the solution:   is there a way to independently check the result, or use it for other problems?

While Polya’s method is very useful, something about it seems a bit too simple.   After all, if it is easy to understand a problem, plan the solution, and carry it out, why are there so many unsolved problems out there?   Why hasn’t someone definitively solved each Millennium Prize problem, like the P=NP question we discussed in podcast 13, and taken the million dollar prize?     I think one key is that a lot of problems require a flash of intuition, or a conceptual leap that is very difficult to arrive at by linear reasoning.   And that’s where the neuroscience comes in.   Recently I’ve been reading an intriguing book by Andy Hunt called “Pragmatic Thinking and Learning”, which offers a number of strategies for stimulating your mind to solve problems in different ways.

As you’ve probably heard somewhere, many modern scientists believe our brains exhibit two main modes of thought.   Commonly these are called “left brain” and “right brain”, but Hunt points out that the strict connection with the brain hemispheres isn’t quite right, so he suggests the terms “L-mode” and “R-mode”, with the L standing for “linear”, and R standing for “rich”.   You can think of the two modes as being the two CPUs of a multiprocessing computer system, potentially working in parallel at all times.   Your L-mode brain excels at analytic, linear thinking, and is the primary user of methods like Polya’s.   Your R-mode brain is what you typically exercise in artistic or creative endeavors.    R-mode, while trickier to interact with due to its nonverbal nature, can also provide intuition, synthesis, and holistic thinking— it probably won’t come up with a mathematical proof, but can lead you to discover a conceptual leap you need to get past a roadblock in one.    But how can we effectively interact with our R-mode, or stimulate its activity, in order to leverage its power?   Hunt suggests a variety of basic techniques for getting a dormant R-mode active and more involved.

One simple method is to try to use different senses than usual, in a way that engages your artistic side.  While thinking about a problem with your L-mode, do some minor creative action with your hands that exercises your R-mode, such as making shapes with a paper clip, doodling, or putting together Legos.   In one amusing example, Hunt describes a case where a team designing a complex computer program decided to get up and “role-play” each of the functional units, and soon had a variety of new insights about the system.

Another method Hunt suggests comes from the domain of computer science, but is likely applicable to many other fields:  “Pair Programming”.   The idea here is that one programmer is actually typing a computer program on the screen, inherently an L-mode activity, while the other is sitting next to him, observing, and making suggestions.   Because the second programmer doesn’t have to worry about the L-mode task of entering the precise sequence of commands, he is free to use his R-mode to take a holistic look, and come up with intuitive suggestions about the overall method.

A third method that can be surprisingly effective is known as “image streaming”.   After thinking about a problem for a while, try to close your eyes and visualize images related to it for ten minutes or so.   For each image you can think of, first try to imagine it visually, then describe out loud how it appears to all five of your senses.   This one sounds a bit silly at first— and I would suggest you don’t try it in an open cubicle with your co-workers watching— but can be a very powerful way to engage your R-mode.

A fourth suggestion is called the “morning pages” technique:  when you wake up every morning, immediately write at least three pages on whatever topic comes to mind.   Don’t censor what you write, or try to revise and make it perfect, just let the information flow.   Because it’s the first thing in the morning, you’re getting an unguarded brain dump, while your R-mode dreams and unconscious thoughts are still fresh in your mind.    If you were working on a hard problem the day before, your R-mode may naturally have provided new insights during the night that you now want to capture.    As Hunt summarizes, “You haven’t yet raised all the defenses and adapted to the limited world of reality”.

These ideas are just a small subset of known techniques for leveraging your lesser-used R-mode— if you want to maximize your ability to use your whole mind for problem solving, I would highly recommend that you check out his book, linked in the show notes.    I’ll be interested to hear from any of you who successfully use some of Hunt’s odder-sounding techniques to solve difficult problems.    On the other hand, if you think everything I’ve said today sounds crazy, that’s probably just your L-mode brain over-exercising its linear, logical influence.

And this has been your math mutation for today.

References:

## Monday, August 22, 2016

### 222: Fractal Expressionism

If you watch enough TV, you probably remember an old sitcom plot where the characters are at a viewing of abstract expressionist art, and somehow a 3-year-old’s paint scribblings get mixed in with the famous works.   Most of the characters, clueless about art, pretend to like the ‘bad’ painting as much as the real paintings, trusting that whatever is on display must be officially blessed as good by the important people.    However, one wise art aficionado spots the fake, pointing out how it is obviously garbage compared to all the real art in the room.   Therefore the many pseudo-intellectuals in the audience get affirmation that their professed fandom of “officially” respected art has a valid basis.     I had always considered this kind of plot a mere fantasy, until I read about physicist Richard Taylor’s apparent success in showing that Jackson Pollock’s most famous paintings actually involve mathematical objects called fractals, and that this analysis can be used to distinguish Pollock artworks from lesser efforts.

Before we talk about Taylor’s work, let’s review the idea of fractals, which we have discussed in some earlier podcasts.   A simple definition of a fractal is a structure with a pattern that exhibits infinite self-similarity.     A popular example is the Koch snowflake.   You can create this shape by drawing an equilateral triangle, then drawing a smaller equilateral triangle in the middle third of each side, and repeating the process on each outer edge of the resulting figure.   You will end up with a kind of snowflake shape, with the fun property that if you zoom in on any local region, it will look like a partial copy of the same snowflake shape.    Other fractals may have a random or varying element in the self-symmetry, which makes them useful to create realistic-looking mountain ranges or coastlines.    The degree of self-similarity in a fractal is measured by something called the “fractal dimension”.
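For a perfectly self-similar shape like the Koch snowflake, this dimension can be computed directly: each construction step replaces a segment with N = 4 copies scaled down by a factor of s = 3, and the self-similarity dimension is log N / log s.

```python
import math

# Self-similarity dimension D = log(N) / log(s):
# each Koch step turns one segment into N = 4 segments at 1/3 the scale.
koch_dimension = math.log(4) / math.log(3)
print(round(koch_dimension, 4))  # 1.2619
```

So the Koch curve sits strictly between a one-dimensional line and a two-dimensional area, which is exactly the sense in which fractal dimensions can be fractional.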

Taylor’s insight was that Pollock’s paintings might actually be representing fractal patterns.   This idea has some intuitive appeal:  perhaps the abstract expressionists were a form of savant, creating deep mathematical structures that most people could understand on an intuitive level but not verbalize.  Taylor created a computer program that would overlay a grid on a painting and look for repeating patterns, reporting the fractal dimension resulting from the analysis.   After examining a large sample of these, his research team announced that Pollock’s paintings really are fractals, tending to almost always fall within a particular range of fractal dimensions.   They also claimed that these patterns could be used with high accuracy to distinguish Pollock paintings from forgeries.   Taylor even claimed at one point that, due to the various changes in technique over Pollock’s career, he could date any Pollock painting to within a year based on its fractal dimension.   Abstract art critics and fans all over the world felt vindicated, and Taylor became the toast of the artistic community.
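We don't know the details of Taylor's actual program, but the standard technique for estimating a fractal dimension from an image is box counting: overlay grids of shrinking box size, count the occupied boxes at each size, and fit the slope of log(count) against log(1/size).  Here is a minimal sketch of that idea (the function name and test data are my own):

```python
import math

def box_counting_dimension(points, sizes):
    """Estimate the fractal dimension of a set of 2D points as the
    slope of log(occupied boxes) versus log(1 / box size)."""
    data = []
    for s in sizes:
        boxes = {(int(x // s), int(y // s)) for x, y in points}
        data.append((math.log(1 / s), math.log(len(boxes))))
    # least-squares slope of the log-log points
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    num = sum((x - mx) * (y - my) for x, y in data)
    den = sum((x - mx) ** 2 for x, _ in data)
    return num / den

# Sanity check: a straight line segment should come out near dimension 1.
line = [(i / 1000, i / 1000) for i in range(1000)]
print(box_counting_dimension(line, [0.1, 0.05, 0.02, 0.01]))  # ~1.0
```

Note how many free choices lurk in even this tiny sketch: which box sizes to use, how to place the grid, how to fit the slope.  That tunability is exactly why results of this kind deserve scrutiny.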

However, the story doesn’t end there.   When first reading about Taylor’s work, something seemed a little fishy to me— it reminded me a bit of the overblown fandom of the Golden Ratio, which we discussed back in episode 185.   You may recall that in that episode, I pointed out that any ratio in nature roughly close to 3:5 could be interpreted as an example of the Golden Ratio, by carefully choosing your points of measurement and level of accuracy.   Similarly, it seems to me that the level of fine-tuning required for Taylor’s type of computer analysis would make it inherently suspect.   Taylor isn’t claiming an indisputable point-by-point self-similarity, as in an image of the Koch snowflake.   He must be inferring some kind of approximate self-similarity, with a level of approximation and tuning that is built into the computer programs he uses.  Furthermore, there is no way his experimental process can be truly double-blind:  all Pollock paintings are matters of public record, and I suspect everyone involved in his study was a Pollock fan to some degree to begin with.   I’m sure most Pollock paintings exhibit some kinds of patterns, and with the right definitions and approximations, just about any kind of pattern can be loosely interpreted as a fractal.   With all this knowledge available as they were creating their program, I’m sure Taylor’s team was able to generate something finely tuned to Pollock’s style, even if they were not conscious of this built-in bias.

Of course, many people in the math and physics world also found Taylor’s analysis suspicious.   A team of skeptics, led by well-known physicist Lawrence Krauss, developed their own fractal-detecting program and tried to repeat Taylor’s analysis.   They found that analyzing fractal dimensions was useless for identifying Pollock paintings.   Several actual paintings were missed, while lame sketches of random patterns by lab staff, such as a series of stars on a sheet of paper that could have been drawn by a child, were given Pollock-like measurements.   When issuing their report, this team claimed to have conclusively proven that fractal analysis is completely useless for distinguishing Pollocks from forgeries.    In some sense, this may not be such a bad result for art fans— as Krauss’ collaborator Katherine Jones-Smith stated, "I think it is more appealing that Pollock's work cannot be reduced to a set of numbers with a certain mean and certain standard deviation.”

So, are Pollock paintings actually describable as fractals, or not?   The jury still seems to be out.   Krauss’s team claimed that they had definitively disproven this idea.   However, Taylor responded that this was merely an issue of them having used a much less sophisticated computer program.   Active research is still continuing in this area, as shown by a 2015 paper that combines Taylor’s method with several other mathematical techniques, and claims a 93% accuracy in identifying Pollocks.   My inclination is that we should still look at this entire area with a healthy skepticism, due to the inability to produce a truly double-blind study when famous artworks are involved.    But there are likely some underlying patterns in abstract expressionist art, at least in the better paintings, which may be a key to why some people find them enjoyable.   So lie back, turn on your John Cage music, and start staring at those Pollocks.

And this has been your math mutation for today.

References:

## Thursday, June 30, 2016

### 221: The Trouble With Glass Beads

Hi everyone— before we start, just wanted to remind you again that the Math Mutation book is out!   To order, you can follow the link at mathmutation.com or just search for it on Amazon.    Please consider posting a review on Amazon if you like it; I’m still waiting for somebody to post one.  And if you ever pass through the Portland, Oregon metro area, I’ll be happy to autograph your copy!   Now on to today’s topic.

I just finished reading the controversial 2006 bestseller “The Trouble With Physics”, by physicist Lee Smolin.   Smolin is an accomplished physicist who has issued a blistering critique of his own field, claiming that ever since the Standard Model of particle physics was developed in the 1970s, the entire field has simply failed to make significant progress in understanding the fundamental laws of the universe.    He largely blames the widespread focus on string theory, the attempt to unify physics that is based on interpreting subatomic particles as tiny multidimensional strings.   In a 2015 followup interview linked in the show notes, he described most of the problems he pointed out as still being highly relevant.    As I was reading Smolin’s book, I couldn’t help but be reminded of a classic novel I read years ago,  Herman Hesse’s “The Glass Bead Game”.

“The Glass Bead Game” describes a world in which its wisest class of professors inhabit a unique, isolated institution, where they spend all their time studying an intricate game that involves moving glass beads around a board.   They have decided that they do not need to directly study most concrete real-world subjects, because all knowledge in the world can theoretically be captured through a mathematical mapping to patterns of glass beads.     This isn’t quite as absurd an idea as it may have seemed in Hesse’s day:  you may recall several earlier podcasts where I mentioned Conway’s Game of Life.   This game is played on an infinite two-dimensional board full of cells, which can be either on or off— you can imagine this being marked with glass beads— and a simple set of rules determines which adjacent cells will be on or off during the next time unit.   Amazingly, this simple game has been proven to be computationally universal, which means that any modern computer can be simulated by some pattern of beads.    Of course, it would be much more efficient to build the computer directly, rather than dealing with the bead-based simulation, which is where Hesse’s concept breaks down.
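For the curious, the rules of Life really are tiny: a dead cell with exactly three live neighbors turns on, and a live cell survives only with two or three live neighbors.  A minimal sketch in Python:

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Game of Life.
    `live` is the set of (x, y) cells that are on."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# A "blinker" oscillates between a horizontal and a vertical bar:
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(life_step(blinker)))  # [(1, 0), (1, 1), (1, 2)]
```

That a rule this small is computationally universal is what makes the glass-bead analogy more than a joke.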

Anyway, Hesse won a Nobel Prize for the novel, and it can be interpreted on many levels; I’m sure his fans will beat me up for my grossly oversimplified summary.   But one basic interpretation of the Game is as a critique of the academia of Hesse’s time:  focusing on their own abstract research, Europe’s class of professors were far too detached from the real world, to the point of losing all relevance to the day-to-day lives of the people around them.   Even though all of life could be represented by the glass beads, the converse was not true:  many manipulations of the beads would tell you absolutely nothing useful or interesting about life.  Yet the professors spent all their time studying the beads while ignoring the world around them.  You would have thought that professors of physics would be the least vulnerable to this kind of critique, since their field forces them to constantly keep in touch with physical reality.

But this is where Smolin’s thesis comes in.   String theory originally became popular when it was seen as a promising way to potentially unite relativity and quantum mechanics, but that first burst of interest was decades ago.   Since then, thousands of physicists have graduated with Ph.D.s and spent their careers working out various details and implications of string theory, but the theory is far from complete.   In fact, the mathematics is so complex that a complete theory seems to be beyond reach, and due to various parameters that cannot be independently derived, it’s probably more correct to describe string theory as a huge family of theories rather than as one unifying theory.   In some of our other podcasts we’ve touched on bizarre but fun ideas implied by these theories, such as the universe having 11 dimensions, and our existence actually being on the surface of a multidimensional membrane.

Yet despite all this activity, Smolin points out that string theory has never come close to experimental verification.   All the academic work in this area has essentially amounted to glass bead games, working out complex mathematical relationships with no clear relevance to reality through experimental verification.    This is a major contrast with past revolutions in physics:  even though some results of early 20th-century physics seemed bizarre, they ultimately were subject to experimental verification.    Einstein’s theory of relativity, for example, predicted astronomical phenomena such as the curvature of light and modifications to the classic Doppler effect.  As a result, observations such as the 1938 Ives-Stilwell experiment were able to provide solid confirming evidence, and relativistic effects are now in daily use by our GPS systems.   Nobody has yet come up with an observable effect that we can use to test string theory, at least with any technology that currently seems within humanity’s reach.

So, according to Smolin, what has caused academic physics to waste such a massive amount of time and resources on a field that seems mainly to be a mathematical game?    As Smolin summarized in a 2015 followup interview, “The problems are rooted in the way the career and funding structures of the academy reward me-too science, lack of courage, entrenchment of failed research programs, legacy building, empire building, narrowness, defensive strategies and groupthink.”    These issues are largely a consequence of the modern sociology of academia.   Starting in the mid 20th century, the idea of being a professor transitioned from something rare and unique to a standardized and regimented profession.   The influence of the senior professors grew, and there was increasing pressure on new entrants to conform to their existing theories, while at the same time needing to “publish or perish”, that is, quickly publish multiple articles to prove their worth.

Related to this thesis, Smolin divides academics into two classes:  “technicians” and “seers”.  The current type of organization favors technicians, smart but compliant Ph.D.s who can cleverly extend existing theories, over seers, brilliant minds with truly original concepts.   While technicians are necessary to complete the work of science, it is seers who lead the way and discover or invent new paradigms.   The classic prototype of a seer is Albert Einstein, who initially could not get an academic job, but explored revolutionary ideas while working as a patent clerk. Smolin points out that to create revolutionary new theories, seers often have to spend several years working out the basic concepts before they can generate publishable results, and this tends to prevent their academic success.   He mentions numerous modern seers he has identified, most of whom have had to develop their ideas apart from standard academic environments.    Smolin argues that to restore progress in physics, academia needs to find a way to encourage and reward seers as well as technicians.

I don’t know enough about modern physics to accurately critique Smolin’s comments on string theory, but having spent some time in grad school in another field, I can easily believe his points about technicians and seers.   I think the leaders of every academic field need to look closely at Smolin’s critique, and at their own fields’ tendencies toward conformity and subservience.   They need to make sure they are providing a way for truly original thinkers to make fundamental changes to their fields when needed, not simply retreating from the real world into a series of comfortable and well-defined glass bead games.

And this has been your math mutation for today.


## Saturday, May 28, 2016

### 220: Cognitive BSes

Hi everyone— before we start, just wanted to remind you, the Math Mutation book is out!   To order, you can follow the link at mathmutation.com or just search for it on Amazon.    And if you ever pass through the Portland, Oregon metro area, I’ll be happy to autograph your copy.   If you like it, posting a positive review on Amazon would be really helpful.  Now on to today’s topic.

As you may recall, one of the topics we covered in some previous podcasts, and in the Math Mutation book, is the idea of “Cognitive Biases”.   These are well-documented ways in which the human brain instinctively thinks in ways that violate basic laws of logic and mathematics.   One classic example is the Anchoring bias:  if asked a question that has a quantitative answer, you will tend to give an estimate close to numbers you recently heard.   For example, suppose I arrange separate discussions with two people to estimate how many listeners Math Mutation has.   With the first one, I start by asking “Does Math Mutation have more than 100 listeners, or fewer than 100?”   But with the second one, I open with “Does Math Mutation have more than 1 million listeners, or fewer than 1 million?”   If I then ask both of them to estimate the total number of listeners, the first person will probably come up with a much smaller estimate than the second, even though neither has any objective information to justify a particular number.

After reading the chapter in my book, my old Princeton classmate Tim Chow pointed out that calling this a “Cognitive Bias” might not be justified.    Sure, the listener technically has no information to support the larger number in the second case— but in cases where we are talking to another human being, we trust them to provide relevant information.   This includes both direct statements of facts, and implications that might not be directly stated.   If I ask you whether Math Mutation has more or fewer than 100 listeners, I am implicitly communicating the information that the 100 number is pretty close, even though I have not rigorously declared this to be a relevant fact.   So if this number isn’t close, I have essentially misled you with false information— the fact that you trusted me and used the wrong number is my fault, not some flaw in your mental logic.    Thus, this “Cognitive Bias” is really a social manipulation.

Now, if you’re familiar with the literature on this topic, you might point out an interesting experiment that seems to refute this.   In this experiment, subjects saw a roulette wheel spin, then were asked what percentage of United Nations member countries are in Africa.   Even though there was no logical reason for them to suspect the roulette wheel had advance knowledge of geopolitics, their answers were still biased towards the results they saw on the wheel.   Many similar experiments have been carried out.   Doesn’t this provide irrefutable proof that this really is a cognitive bias?

Not so fast.   This is a very artificial situation.   Maybe when asked to guess a number about which they have absolutely no idea, subjects just grab any arbitrary number they can think of, which will tend to be one they saw recently.   They aren’t following some flawed cognitive process; they just don’t have any reason to pick any particular number.   Again, this doesn’t really indicate a mathematical flaw in their reasoning— they don’t think the number they picked has any particular logical justification.   Not knowing the answer, they just defaulted to the first number off the top of their heads.

Most of the other well-established Cognitive Biases are open to similar criticisms.  Another example is the Conjunction Fallacy:   suppose I tell you that Joe is a Princeton mathematics graduate and chess champion, and then ask you to choose the more likely of two statements.  1.  “Joe is now a physics professor.”  2. “Joe is now a physics professor and head of the local Math Mutation fan club.”    You will likely choose option #2, since it seems like this kind of guy should be a Math Mutation fan.   But on reflection, option 2 must be strictly less likely than option 1, as it takes the same basic fact and adds an additional, more restrictive, condition.  But again, there is information being communicated between the lines:  if I give you those two choices, you probably interpret #1 as implicitly stating that Joe is NOT head of the Math Mutation fan club.   I didn’t say that, but the additional choice in the second option made this a very reasonable inference.   Once again, it can be seen as more of a social manipulation, where I leveraged typical communication conventions to imply something without actually stating it, and the implication is not strictly justified by mathematical logic.
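The arithmetic behind the conjunction rule is easy to verify directly: for any events A and B, the probability of “A and B” can never exceed the probability of A alone.  A quick simulation, using entirely made-up probabilities for Joe’s career (the specific numbers are arbitrary illustrations, not anything from the episode), shows the effect:

```python
import random

random.seed(0)
trials = 100_000

# Made-up probabilities, purely for illustration.
p_professor = 0.3      # chance Joe is a physics professor
p_fan_if_prof = 0.5    # chance such a professor also heads the fan club

professor_count = 0
professor_and_fan_count = 0
for _ in range(trials):
    is_professor = random.random() < p_professor
    is_fan = is_professor and random.random() < p_fan_if_prof
    professor_count += is_professor
    professor_and_fan_count += is_professor and is_fan

# The conjunction can never be observed more often than either part alone.
print(professor_and_fan_count <= professor_count)  # always True
```

No matter what probabilities you plug in, the conjunction count can only tie or fall below the single-event count, which is exactly why option 2 cannot be more likely than option 1.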

We should point out, though, that even if the so-called “Cognitive Biases” are not truly flaws in the logic of the human brain, they are still important psychological effects to be aware of for many reasons.   For example, let’s take a look at some practical applications of the Anchoring bias.  It’s well known that when negotiating prices in business, your opening offer can set an anchor that affects the entire discussion.   Business contacts in a negotiation usually have some level of trust in each other, and taking advantage of this to establish a good anchor is a smart, though slightly manipulative, technique.    On another note, suppose you’re trying to get accurate estimates in a survey, or questioning experts on a difficult topic.   You need to be careful not to include some kind of number in the question that might unintentionally influence the result.   So knowing about Anchoring is still very useful, whether you call it a true cognitive bias or a simple persuasion technique.    In general, I still believe the Cognitive Biases are worth studying and raising awareness of, though maybe more as social or linguistic phenomena than as true flaws in the human mind.

And this has been your math mutation for today.


## Sunday, May 1, 2016

### 219: A Portal to the Past

I was sad to hear of the recent passing of Umberto Eco, one of my favorite contemporary novelists.   Due to his long career as a professor of “semiotics”, his novels drew on a wide variety of historical, mathematical, and scientific ideas from throughout the past millennium.    One that I read relatively recently was “The Island of the Day Before”, his 1994 story of a soldier marooned during the 1600s, the historical period during which the governments of Europe were desperately searching for a reliable way to measure longitude.   You may recall our discussion of this multi-century quest back in episode 108.    The key plot twist is that the main character, an Italian soldier named Roberto Della Griva, gets marooned on a ship that is trapped within sight of the International Dateline, and an island which sits beyond it.

Della Griva attaches a mystical significance to this line, as the book’s title implies.   He somehow convinces himself that if he could just cross it, he would be traveling back in time.   If he managed to swim across the line tomorrow, would that mean that today he should be able to see himself swimming in the distance?  Here is one amusing passage from the book:

“Indeed, as he sees it distant not only in space but also (backwards) in time, from this moment on, whenever he mentions that distance, Roberto seems to confuse space and time, and he writes, "The bay, alas, is too yesterday," and, "How much sea separates me from the day barely ended," and even, "Threatening rainclouds are coming from the Island, whereas today it is already clear . . . .”

While these speculations are rather absurd, this discussion got me curious about the actual history of the International Dateline.   It has been recognized since ancient times that the time of day differs slightly as you travel to the east or west, but the idea that you might travel all the way around the world and gain or lose a day was only really conceivable in relatively recent eras of human history.    The real history of the Dateline can properly be said to have started with Magellan’s circumnavigation of the globe.   When the handful of survivors of that three-year voyage arrived home in 1522, they were surprised to discover that despite careful logging of their travels, they had lost a day.     Here is a description from one of them:

On Wednesday, the ninth of July [1522], we arrived at one of these islands named Santiago, where we immediately sent the boat ashore to obtain provisions. [...] And we charged our men in the boat that, when they were ashore, they should ask what day it was. They were answered that to the Portuguese it was Thursday, at which they were much amazed, for to us it was Wednesday, and we knew not how we had fallen into error. For every day I, being always in health, had written down each day without any intermission. But, as we were told since, there had been no mistake, for we had always made our voyage westward and had returned to the same place of departure as the sun, wherefore the long voyage had brought the gain of twenty-four hours, as is clearly seen.

As you can see, even though they were caught by surprise at first, the sailors were able to quickly recognize their error.   Because they were traveling in the same direction as the sun, they had experienced one fewer day, but each day they had experienced was slightly longer.   So there was no actual time travel, just an accounting error.    A similar phenomenon was observed later by other circumnavigators, such as the English explorer Francis Drake.    This incident also inspired the famous surprise ending of Jules Verne’s “Around the World in 80 Days”.   Even though the reasons for the gain or loss of a day upon circumnavigation were well known, I suppose it is vaguely possible that an uneducated sailor like Eco’s character could have attached a more mystical significance to the effect.
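The bookkeeping behind that “lost” day is easy to check with a little arithmetic.  The sketch below assumes a roughly three-year westward voyage; the specific day count is just an illustrative figure, not Magellan’s actual log:

```python
# Hypothetical figures for a westward circumnavigation of roughly three years.
days_at_home_port = 1082                         # days elapsed back home (assumed)
days_counted_on_ship = days_at_home_port - 1     # westward travelers see one fewer sunrise

# Traveling with the sun stretches each shipboard day slightly.
extra_minutes_per_day = 24 * 60 / days_counted_on_ship
print(f"Each shipboard day was about {extra_minutes_per_day:.2f} minutes longer")

# Those slightly longer days add up to exactly the 24 hours "gained".
total_extra_hours = extra_minutes_per_day * days_counted_on_ship / 60
print(f"Total extra time accumulated: {total_extra_hours:.0f} hours")
```

Each day aboard was only a minute or so longer than a day at the home port, which is why the sailors never noticed until they compared calendars.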

But even these odd experiences of sailors were not really common enough to motivate standardization of time zones and an international dateline until the era of trains came along in the 19th century.   Suddenly it was possible to move quickly and continuously between areas with different local times.    Finally, in October 1884, representatives from 25 nations met at an international conference in Washington, DC, and came up with the system of time zones we know today, based on longitudinal lines starting from the Greenwich meridian, with times derived by adding to or subtracting from Greenwich Mean Time, or GMT.   To increase the chances of universal adoption, it was agreed that individual islands and nations could shift the lines for convenience, which is why we see those squiggly time zone boundaries today instead of simple longitudinal lines.     Amusingly, the French seemed insulted by the idea of a location in England defining the time zones: rather than adopt Greenwich Mean Time by name, in 1911 they defined their legal time as Paris Mean Time minus nine minutes and 21 seconds, which was equivalent.

Unfortunately, since these meridians and zones are all artificial labels, I’m afraid Della Griva’s dream of somehow using them for time travel would never quite pan out.   Some modern writers ponder this idea as well, but I think they are mostly tongue-in-cheek.   For example, travel author Bill Bryson has written:

“I left Los Angeles on January 3 and arrived in Sydney fourteen hours later on January 5. For me there was no January 4. None at all. Where it went exactly I couldn’t tell you. All I know is that for one twenty-four-hour period in the history of earth, it appears I had no being.   I find it a little uncanny, to say the least. I mean to say, if you were browsing through your ticket folder and you saw a notice that said, ‘Passengers are advised that on some crossings twenty-four-hour loss of existence may occur’…, you would probably get up and make inquiries, grab a sleeve, and say, ‘Excuse me.’”

But somehow I don’t think Bryson really believes he lost a day of his life due to the shifting of a few time labels.    I would be more concerned about the portion of my existence that is wasted while crammed into a tiny airline seat for half a day.

In a more serious vein, there was a solar eclipse last month that started on March 9 and ended on March 8; it was actually traveling forward in time the whole way, achieving its peculiar timeline by crossing the International Dateline as it traveled.   Also, you shouldn’t completely lose hope— one form of time travel across the dateline is possible.   Remember that under the theory of relativity, if you travel by airplane at high speed, you really do lose a tiny fraction of a second, as time slows down for you relative to your friends on the ground.   However, that works along any route, not just across one artificial line.
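For a rough sense of scale, here is a back-of-the-envelope special-relativity estimate of the time a passenger “loses” on a half-day flight.  The cruise speed is an assumed round number, and gravitational time dilation (which is comparable in size at cruise altitude) is ignored:

```python
# Back-of-the-envelope special-relativity estimate; ignores gravitational
# time dilation, which is of comparable magnitude at cruise altitude.
c = 299_792_458.0        # speed of light, m/s
v = 250.0                # assumed jet cruise speed, m/s (~900 km/h)
flight_seconds = 12 * 3600.0

# For v much smaller than c, 1 - sqrt(1 - v^2/c^2) is approximately v^2 / (2 c^2).
seconds_lost = flight_seconds * v**2 / (2 * c**2)
print(f"Time 'lost' relative to the ground: {seconds_lost:.2e} seconds")
```

The answer comes out to a few tens of nanoseconds, a far smaller sacrifice than the half day spent in that tiny airline seat.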

And this has been your math mutation for today.
