Sunday, February 18, 2018

238: Programming Your Donkey


You have probably heard some form of the famous philosophical conundrum known as Buridan's Ass.  While the popular name comes from the 14th-century philosopher Jean Buridan, the puzzle actually goes back as far as Aristotle.  One popular form of the paradox goes like this:  suppose a donkey wants to eat some food, and there are equally distant, identical apples visible ahead to its left and right.  Since the two are precisely equivalent in both distance and quality, the donkey has no rational reason to turn towards one rather than the other, so it will remain in the middle and starve to death.

It seems that medieval philosophers spent quite a bit of time debating whether this paradox is evidence of free will.   After all, without the tie-breaking power of a living mind, how could the animal make a decision one way or the other?   Even if the donkey is allowed to make a random choice, the argument goes, it must use its living intuition to decide to make such a choice, since there is no rational way to choose one alternative over the other.  

You can probably think of several flaws in this argument, if you stop and think about it for a while.  Aristotle didn't really think it posed a real conundrum when he mentioned it; he was making fun of sophist arguments that the Earth must be stationary because it is round and has equal forces operating on it in every direction.  Ironically, the case of balanced forces is one of the rare situations where the donkey analogy might actually be kind of useful:  in Newtonian physics, it is indeed the case that if the forces on an object are equal in every direction, it will stay still.  But medieval philosophers seem to have taken the paradox more seriously, as a dilemma that might force us to accept some form of free will or intuition.

I think my biggest problem with the whole idea of Buridan’s Ass as a philosophical conundrum is that it rests on a horribly restrictive concept of what is allowed in an algorithm.  By an algorithm, I mean a precise mathematical specification of a procedure to solve a problem.   There seems to be an implicit assumption in the so-called paradox that in any decision algorithm, if multiple choices are judged to be equally valid, the procedure must grind to a halt and wait for some form of biological intelligence to tell it what to do next.   But that’s totally wrong— anyone who has programmed modern computers knows that we have lots of flexibility in what we can specify.   Thus any conclusion about free will or intuition, from this paradox at least, is completely unjustified.   Perhaps philosophers in an age of primitive mathematics, centuries before computers were even conceived, can be forgiven for this oversight.

To make this clearer, let's imagine that the donkey is robotic, and think about how we might program it.  For example, the donkey might be programmed so that whenever two movement choices are judged exactly equal, it simply picks the one on the right.  Alternatively, randomized algorithms, where an action is taken based on a random number, essentially flipping a virtual coin, are also perfectly fine in modern computing.  So another option is just to have the donkey generate a random number to break any ties in its decision process.  The important thing to realize is that both of these are basic, easily specifiable methods, fully within the capabilities of any computer built over the past half century, and requiring nothing like free will.  They are fully rational, precisely specified algorithms, yet far simpler than any human-like intelligence.  Procedures like these could easily have evolved in the mind of any advanced animal.
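To make the idea concrete, here is a minimal sketch in Python of both tie-breaking rules.  The function name, the utility scores, and the two-apple setup are my own illustrative assumptions, not anything from the original paradox:

    import random

    def pick_apple(score_left, score_right, tie_break="right"):
        # Compare the two apples by whatever utility score the donkey
        # assigns them (distance, quality, and so on).
        if score_left > score_right:
            return "left"
        if score_right > score_left:
            return "right"
        # The Buridan case: both apples score exactly the same.
        if tie_break == "right":
            return "right"                         # fixed rule: favor the right
        return random.choice(["left", "right"])    # virtual coin flip

    # Two identical apples never leave the donkey stuck:
    pick_apple(1.0, 1.0)                     # always "right"
    pick_apple(1.0, 1.0, tie_break="coin")   # "left" or "right" at random

Either branch terminates immediately; nothing in the definition of an algorithm forces the procedure to stall just because the inputs are tied.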

Famous computer scientist Leslie Lamport has an interesting take on this paradox, but I think he makes a mistake similar to the medieval philosophers', artificially restricting the algorithms allowed in our donkey's programming.  For his model, assume the apples and the donkey are on a number line, with one apple at position 0 and one at position 1, and the donkey at an arbitrary starting position s.  Let's define a function F that describes the donkey's position an hour from now, in terms of s.  F(0) is 0, since if he starts right at apple 0, there's no reason to move.  Similarly, F(1) is 1.  Now, Lamport adds a premise:  the function the donkey uses to decide his final location must be continuous, corresponding to how he thinks naturally evolved algorithms should operate.  By the intermediate value theorem, if a continuous function has F(0) equal to 0 and F(1) equal to 1, then for any value v between them there must be some starting point x where F(x) is v.  In other words, there must be starting points x where F(x) is neither 0 nor 1, meaning the donkey can still be stuck strictly between the two apples an hour from now.  Since the choice of one hour was arbitrary, a similar argument works for any amount of time, and we are guaranteed to be stuck forever from certain starting points.  It's an interesting take, and perhaps I'm not doing Lamport justice, but it seems to me that this is just a consequence of the unfair restriction that the function must be continuous.  I would expect precisely the opposite:  the function should have a discontinuous jump from 0 to 1 at the midpoint, with the value there determined by one of the donkey-programming methods I discussed before.
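As a sketch of that opposite view, here is the kind of discontinuous decision function I have in mind, written in Python, with the midpoint tie resolved by the "favor the right" rule from earlier (mapped here to apple 1).  The function name and the choice of which apple wins the tie are illustrative assumptions on my part:

    def final_position(s):
        # Donkey's position one hour from now, starting at s in [0, 1].
        # There is a discontinuous jump at the midpoint: every starting
        # point leads to one apple or the other, so none leaves the
        # donkey stuck in between.
        if s < 0.5:
            return 0.0   # closer to apple 0, so walk there
        return 1.0       # closer to apple 1, or tied at s == 0.5

This F satisfies F(0) = 0 and F(1) = 1 but takes no values in between, precisely because it abandons Lamport's continuity premise.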

I did find one article online describing a scenario where this paradox might provide some food for thought, though.  Think about a medical doctor who is attempting to diagnose a patient based on a huge list of weighted factors, and who reaches a point where two different diseases are equally likely by all available measurements.  Maybe the patient has virus 1, and maybe he has virus 2, but the medicine that would cure each one is fatal to anyone without that infection.  How should the doctor decide how to treat the patient?  I don't think a patient would be too happy with either of the methods we suggested for the robot donkey:  arbitrarily biasing towards one decision, or flipping a coin.  On the other hand, we don't know what goes on behind closed doors after doctors leave the examining room to confer.  Based on TV, we might think they are always carrying on office romances, confronting racism, and consulting autistic colleagues, but maybe they are using some of our suggested algorithms as well.  In any case, if we assume the patient is guaranteed to die if untreated, is there really a better option?  In practice, doctors resolve such dilemmas by continually developing more and better tests, so the chance of being truly stuck becomes negligible.  But I'm glad I'm not in that line of work.



And this has been your math mutation for today.
