Tuesday, June 15, 2021

270: Which Way To Turn

 Audio Link


Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would never hear in school.   Recording from our new headquarters in Wichita, Kansas, this is Erik Seligman, your host.


Apologies for the long gap since the last episode— as you just heard, we have been relocating Math Mutation headquarters across the country.   You can probably guess that this involves lots of details to handle.   We should soon be back on our almost-regular monthly schedule.


Anyway, with all the moving stuff going on, I recalled a topic I had been considering a while back.   It’s a pretty simple geometry question, yet one with a major (if subtle) effect on all our lives:  why are screws generally right-handed?   If you’ve ever had to screw something together, you probably remember the saying “lefty loosey, righty tighty”, which reflects the common design of these basic and universal tools.   Screws are typically considered one of the six basic “simple machines”, along with the inclined plane, lever, pulley, wedge, and wheel and axle.   So why are they almost always designed to tighten in the same direction?


With a quick web search, it seems almost unanimous that this convention simply derives from the typical handedness of humans:  if you’re right-handed, then a right-handed screw is easiest to screw in.   It’s just as easy to manufacture screws in either direction, but when screw threads were first standardized (with Joseph Whitworth’s design in 1841), a single right-handed convention became ubiquitous.   I guess my poor left-handed daughter will forever be a victim of this society-wide conspiracy.


But the more surprising fact I discovered when researching this topic is that left-handed screw threads do exist, and are used for a variety of specialized applications.   Perhaps the most obvious is for situations where the natural rotation of an object would tend to loosen a right-handed screw:  for example, the left-side pedals on a bicycle, certain lug nuts on the left side of cars, or fasteners securing rotating machine parts whose motion would otherwise unscrew them.   There are also cases where it’s useful to couple a left-handed and a right-handed connection, in a pipe fitting for example, so that rotation in a single direction tightens both ends at once.   A less obvious usage is for safety:  in many applications involving flammable gases, the connections for the gas line use left-handed rather than right-handed threads, so nobody can connect the wrong pipe by accident.


On the other hand, a few of the uses described just seemed kind of silly, though I suppose they had valid reasons at the time.   In the early 20th century, many lamps used in subways were specially designed to take bulbs that screw in with left-handed threads, so nobody could steal them and use them at home.   It’s probably a sign of our society’s growing prosperity over the last century that most people don’t steal public light bulbs anymore.   More bafflingly, I found a few references to the use of left-hand threads in early ballpoint pens, to provide a “secret method” of disassembly.   I guess business meetings were a lot less boring back then— given the amount of idle fiddling I’ve typically done with my pens on a normal day at work, I can’t imagine that secret lasting very long.


And this has been your math mutation for today.  




References:  

Sunday, April 25, 2021

269: A Good Use of Downtime

 Audio Link

The recent observation of Holocaust Remembrance Day reminded me of one of the stranger stories to come out of World War II.   John Kerrich was a South African mathematician who made the ill-fated decision to visit some family members in Denmark in 1940, just before the Germans invaded, and soon found himself interned in a prison camp.   We should point out that he was one of the luckier ones:  the Germans allowed the Danes to run their prison camps locally, so he lived in relatively humane conditions compared to the majority of prisoners in German-occupied territories.   But being imprisoned still left him with many hours of time to fill over the course of the war.   Kerrich decided to fill this time by doing some experiments to demonstrate the laws of probability.


Kerrich’s main experiment was a very simple one:  he and a fellow prisoner, Eric Christensen, flipped a coin ten thousand times and recorded the results.   Now you might scratch your head in confusion when first hearing this— why would someone bother with such an experiment, when it’s so easy for anyone to do at home?   We need to keep in mind that back in 1940, the idea that everyone would have a computer at home (or, as we now do, in their pocket) that could produce seemingly endless numbers of simulated coin flips would have seemed like a crazy sci-fi fantasy.   Back then, most people had to manually flip a physical coin or roll a die to generate a random number, a very tedious process.   Technically there were some advanced computers under development at the time that could have done the simulation if programmed, but these were being run under highly classified conditions by major government entities.   So recording the values of ten thousand coin flips actually did seem like a useful contribution to math and science at the time.


So, what did Kerrich accomplish with his coin flips?   The main purpose was to demonstrate the Law of Large Numbers.   This is the theorem that says that if you perform an experiment a large number of times, the average result will asymptotically approach the expected value.   In other words, if you have a coin with 50-50 odds of coming up heads or tails, then as you perform lots of trials, you will over time get closer and closer to 50% heads and 50% tails.   Kerrich’s coin came up heads precisely 5,067 times out of 10,000, and over the course of the experiment the running ratio got closer and closer to 50-50, thus providing reasonable evidence for the Law.   (In any 10,000 flips of a fair coin, there is about an 18% chance of being off from a perfect 5,000-5,000 split by at least this amount, 67 flips, so this result is quite reasonable for a single trial.)
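By the way, if you want to double-check that 18% figure yourself, here’s a quick back-of-the-envelope sketch (a modern convenience, of course, not something Kerrich could have run) using the normal approximation to the binomial distribution:

```python
from statistics import NormalDist

n, p = 10_000, 0.5
mean = n * p                      # 5,000 expected heads
sd = (n * p * (1 - p)) ** 0.5     # standard deviation = 50 flips

# Two-tailed probability that a fair coin ends up at least as far
# from 5,000 heads as Kerrich's 5,067 did.
z = (5_067 - mean) / sd
prob = 2 * (1 - NormalDist().cdf(z))
print(f"P(at least 67 off from 5000) = {prob:.2f}")   # about 0.18
```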


Of course, it might make sense to request another trial of 10,000 flips for improved confidence in the result.   But apparently even in prison you don’t get bored enough for that— in his book, Kerrich wrote, “A way of answering the… question would be for the original experimenter to obtain a second sequence of 10000 spins.   Now it takes a long time to spin a coin 10000 times, and the man who did it objects strenuously to having to take the trouble of preparing further sets.”


Kerrich and Christensen also did some other experiments along similar lines.   By constructing a fake coin from wood and lead, they created a biased coin to flip, and over the course of many flips demonstrated roughly 70/30 odds for the two sides.   This experiment was probably less interesting because, unlike a standard coin, there likely wasn’t a good way to estimate its expected probabilities before the flips.   A more interesting experiment was the demonstration of Bayes’ Theorem using colored ping-pong balls in a box.   This theorem, as you may recall, helps us calculate the probability of an event when we have some knowledge of prior conditions that affect the likelihood of each outcome.   The simple coin flip experiment seems to be the one that has resonated the most with reporters on the Internet, though, perhaps because it’s the easiest to understand for anyone without much math background.
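As a quick refresher on Bayes’ Theorem, here’s a tiny sketch in the spirit of that ping-pong ball experiment.   (The specific numbers here are invented for illustration; I haven’t tried to reproduce Kerrich’s actual setup.)

```python
# Two hypothetical boxes of ping-pong balls:
#   Box A: 3 red, 1 white      Box B: 1 red, 3 white
# We pick a box at random, draw one ball, and see that it's red.
# How likely is it that we picked Box A?

p_a = 0.5                  # prior: each box equally likely
p_red_given_a = 3 / 4      # chance of red if we hold Box A
p_red_given_b = 1 / 4      # chance of red if we hold Box B

# Total probability of drawing red, over both boxes
p_red = p_red_given_a * p_a + p_red_given_b * (1 - p_a)

# Bayes' Theorem: P(A | red) = P(red | A) * P(A) / P(red)
p_a_given_red = p_red_given_a * p_a / p_red
print(p_a_given_red)       # 0.75
```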


In 1946, after the end of the war, Kerrich published his book, “An Experimental Introduction to the Theory of Probability”.   Again, while it may seem silly these days to worry about publishing experimental confirmation of something so easy to simulate, and which is a rigorously proven theorem anyway, this really did seem like a useful contribution in the days before widespread computers.   The book seems intended for college math students seeking an introduction to probability, and in it Kerrich goes over many basics of the field as demonstrated by his simple experiments.   If you’re curious about the details and the graphs of Kerrich’s results, you can read the book online at openlibrary.org, or click the link in our show notes at mathmutation.com.   Overall, we have to give credit to Kerrich for managing to do something mathematically useful during his World War II imprisonment.


And this has been your math mutation for today.




References:  


Saturday, March 20, 2021

268: The Right Way to Gamble

Audio Link

At some point, you’ve probably heard an urban legend like this:  someone walks into a Las Vegas casino with his life savings, converts it into chips, and bets it all on one spin of the roulette wheel.   In the version where he wins, he walks out very wealthy.   But in the more likely scenario where he loses, he leaves totally ruined.   This isn’t just an urban legend, by the way; it has actually happened numerous times.   For example, in 2004 a man named Ashley Revell did this with $135,000, and ended up doubling his money on one spin at roulette.   I most recently read about this incident in a book called “Chancing It:  The Laws of Chance and How They Can Work for You”, by Robert Matthews.   And Matthews makes the intriguing point that if you are desperate to significantly increase your wealth ASAP, and want to maximize the chance of this happening, what Revell did might not be entirely irrational.


Now, before we get into the details, I want to make it clear that I’m not recommending casinos or condoning gambling.   Occasionally while on vacation my wife & I will visit a casino, and here’s my foolproof winning strategy.   First you walk in, and let yourself take in the dazzling atmosphere of the flashing lights, sounds, and excitement.   Then walk over to the bar, plop down ten bucks or so, and buy yourself a tasty drink.   Sit down at the counter, take out your smartphone, start up the Kindle app, and read a good book.   (The Matthews book might be a nice choice, linked in the show notes at mathmutation.com.)   Then relax in the comfortable knowledge that you’re ahead of the casino by one pina colada, which you probably value more than the ten dollars at that particular time.   Don’t waste time attempting any of the actual casino games, which always have odds fundamentally designed to make you lose your money.


Anyway, getting back to the Revell story, let’s think for a minute about those rigged odds in a casino.   Basically, the expected value of your winnings (the sum, over all outcomes, of each outcome’s probability times its payoff) is always negative.   So, for example, let’s look at betting red or black in roulette.   This might seem like a low-risk bet, since there are two colors, and the payout is 1:1.   When you look closely at an American wheel, though, you’ll see that in addition to the 36 red or black numbers, there are two green ones, 0 and 00.   Thus, if you bet on red, your chances of winning aren’t 18/36, but 18/38, or about 47.37%.   That’s the sneaky way the casinos get their edge in this case.   As a result, your expected winnings for each dollar you bet are (18/38)(+$1) + (20/38)(−$1), or around negative 5.26 cents.   This means that if you play a large number of games, you will probably suffer a net loss of a little over 5% of the money you wager.
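To spell out that arithmetic, here’s the expected value calculation as a quick sketch:

```python
# Expected winnings per $1 bet on red, on an American roulette wheel.
p_win = 18 / 38        # 18 red numbers out of 38 total slots
p_lose = 20 / 38       # 18 black numbers plus the green 0 and 00

ev = p_win * 1 + p_lose * (-1)
print(ev)              # about -0.0526, i.e. -5.26 cents per dollar
```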


So let’s assume that Revell desperately needed to double his money in one day— perhaps his stockbroker had told him that if he didn’t have $200,000 by midnight, he would lose the chance to invest in the Math Mutation IPO, and he couldn’t bear the thought of missing out on such a cultural milestone.   Would it make more sense for him to divide his money into small bets, say $1000, and play roulette 135 times, or gamble it all at once?   Well, we know that betting it all at once gave about a 47% chance of doubling it— pretty good, almost 50-50 odds, even though the casino still has its slight edge.   But if he had bet it slowly over 100+ games, then the chances would be very high that his overall net winnings would be close to the expected value— so he would expect to lose about 5% of his money, even if he put his winnings in a separate pocket rather than gambling them away.    In other words, in order to quickly double his money in a casino, Revell’s single bold bet really was the most rational way to do it.
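If you’d like to see those two strategies side by side, here’s a minimal Monte Carlo sketch.   (This is my own illustration, not code from Matthews’ book, and it idealizes Revell’s bet as a simple red/black wager.)

```python
import random

BANKROLL = 135_000
STAKE = 1_000
P_WIN = 18 / 38            # red/black bet on an American wheel

def one_big_bet():
    """Bet the whole bankroll on red once: double or nothing."""
    return BANKROLL * 2 if random.random() < P_WIN else 0

def many_small_bets():
    """Bet $1,000 on red 135 times, keeping a running total."""
    total = BANKROLL
    for _ in range(BANKROLL // STAKE):
        total += STAKE if random.random() < P_WIN else -STAKE
    return total

TRIALS = 100_000
big = [one_big_bet() for _ in range(TRIALS)]
small = [many_small_bets() for _ in range(TRIALS)]

print(f"P(doubled, one big bet): {sum(b >= 2 * BANKROLL for b in big) / TRIALS:.1%}")
print(f"P(doubled, 135 bets):    {sum(s >= 2 * BANKROLL for s in small) / TRIALS:.1%}")
print(f"Average after 135 bets:  ${sum(small) / TRIALS:,.0f}")
# Typical output: roughly 47.4% vs 0.0%, with the slow strategy
# averaging around $127,900 -- the house edge grinding away about
# 5.26% of every dollar wagered.
```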


And this has been your math mutation for today.




References:  

https://www.amazon.com/dp/B014RT1M1U/








Sunday, February 7, 2021

267: Free Will Isn't Free

 Audio Link

If you’ve browsed the web sometime in the last three decades, especially if you visit any New Age or philosophy websites, you’ve probably come across the argument that quantum mechanics proves the existence of free will.   Superficially, this seems somewhat plausible, in that quantum mechanics blows away our traditional idea that we can fully understand the future behavior of the universe based on observable properties of its particles and forces.   Free will seems like a nice way to fill in the gap.   But when you think about it in slightly more detail, there seems to be a fatal flaw in this argument.   So let’s take a closer look.


To start with, what is so special about quantum mechanics?   This is the area of physics, developed in the 20th century, that tries to explain the behaviors we observe in subatomic particles.   What’s bizarre about it is that, according to its calculations, God is (in Einstein’s famously disapproving phrase) playing dice with the universe:  we can calculate probabilities for the position and momentum of particles, but not the exact values until we observe them.   For example, suppose I am throwing a toy mouse for my cat to catch.   Since this is a macroscopic object, classical physics works fine:  my cat can take out a calculator, and based on the force I throw with, the mass of the mouse, and the effects of gravity, he will be able to figure out exactly where to pounce.   But now suppose I am throwing a photon for him to catch.   Even if he knows everything that is theoretically knowable about my throw, he will not be able to calculate exactly where it will land— he can only calculate a set of probabilities, and then figure out where the photon went by observing it.   Of course, my cat uses the observation technique even with macroscopic mice, so no doubt he has read a few books on quantum physics and is trying to use the most generally applicable hunting method.


But how does this lead to free will?   The most common argument is that since things happen in the universe that cannot be precisely calculated from all the known properties of its particles and energies, there must be another factor that determines what is happening.   Many physicists seem to believe that a hidden physical factor has been largely ruled out.   That means the missing piece, according to the free will argument, is likely to be human consciousness.    When the activities of the particles in our brain could determine multiple possible outcomes, it is our consciousness that chooses which one will actually happen.    The quantum events in my brain, for instance, could lead with equal probability to me recording a podcast this afternoon, or playing video games.   My free will is needed to make the final choice.


Now time for the fundamental flaw in this argument.    Suppose the quantum activities in my brain do ultimately give me a 50% chance of recording a podcast this afternoon, and a 50% chance of playing videogames.   The implicit assumption in the free will argument is that if the properties of my brain determined completely that I would make a podcast this afternoon, then I would not have free will.   Since there are two alternatives and I have to choose one, there is free will.    However, remember that the quantum calculations provided an exact probability for each event, not just a vague uncertainty, and these probabilities have been well-confirmed in the lab.   If I’m required to roll a 6-sided die, and record the podcast if numbers 1-3 come up, or play videogames if 4-6 comes up, aren’t I just as constrained as if I were only allowed one of those options?   I still don’t get any freedom here.    Either way, it’s not a question of my consciousness, it’s just another calculation, though one whose result cannot be predicted.   If I’m forced to do certain things with known probabilities, that’s the opposite of free will.


There is one more wrinkle here though, which may rescue the free will argument.   Suppose we interpret the quantum calculation slightly differently.   Yes, there is a 50% chance of me recording the podcast, and 50% of playing a videogame.   But maybe this isn’t the universe rolling a die— maybe it’s a measure of the type of personality created by the sum total of quantum interactions in my brain.   So rather than being forced to roll a virtual die and decide, the universe has just built me into the kind of guy who, given a choice, would have a 50-50 chance of podcasting or gaming this afternoon.    All the quantum calculations about my brain activity are just figuring out the type of personality that has been created by its construction.   That argument rescues free will in the presence of quantum probabilities.   Though it does put us in the odd position of wanting to ascribe some kind of conscious will to subatomic particles observed in a physics lab, or to assume some hidden spirits in the room are making decisions for them.


So what’s the answer here?    Well, I’m afraid that as often happens with these types of questions, we will not resolve it in a 5-minute podcast.    You can also find some more subtle arguments that incorporate other aspects of quantum physics, though those don’t look too convincing to me either.   If humanity truly doesn’t have free will, I’ll look forward to the day when someone writes a treatise on how a large Big Bang of hydrogen atoms fundamentally leads to Math Mutation podcasts a few billion years later.


And this has been your math mutation for today.




References:  







Friday, December 25, 2020

266: A Number is a Number

 Audio Link


You may recall that in our last episode we discussed the results of 19th-century attempts to rigorously define the concept of whole numbers.    These attempts culminated in the Peano axioms, a set of simple properties that defined numbers on the basis of the primitive concepts of zero and succession, plus a few related rules.   While this definition has its merits, Bertrand Russell pointed out that it also has some major flaws:  sets that we don’t think of as whole numbers, such as all numbers above 100, all even numbers, or all inverted powers of two, could also satisfy these axioms.   So, how did Russell propose to define numbers?


Here’s Russell’s definition:   “A number is anything which is the number of some class”.    Great, problem solved, we can end the podcast early today!   Or…  maybe not.   Let’s explore Russell’s concepts a bit, to figure out why this definition isn’t as circular as it seems.


The basic concept here is that we think of a whole number as a description of a class of sets, all sets which contain that number of elements.    Let’s take a look at the number 2.   The set of major US political parties, the set of Mars’s moons, and the set of my daughter’s cats are all described by the number two.   But how do we know this?   You might say we could just count the elements in each one to see they have the same number— but Russell points out that that would be cheating, since the concept of counting can only exist if we already know about whole numbers.    So what do we do?


The concept of 1-1 correspondence between sets comes to the rescue.    While we can’t count the elements of a set before we define whole numbers, we can describe similar sets:  a pair of sets are similar if their elements can be put into direct 1-1 correspondence, without any left out.   So despite lacking the intellectual power to count to 2, I can figure out that the number of my daughter’s kittens and the number of moons of Mars are the same:    I’ll map Mars’s moon Phobos to Harvey, and Mars’s moon Deimos to Freya, and see that there are no moons or cats left over.
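Just for fun, here’s that idea as a tiny sketch:  checking Russell-style similarity by pairing elements off, without ever counting them.   (The pairing loop is my own playful illustration, of course, not Russell’s notation.)

```python
# Two sets are "similar" if their elements can be put into 1-1
# correspondence with none left over -- no counting required.
mars_moons = {"Phobos", "Deimos"}
kittens = {"Harvey", "Freya"}

def similar(a, b):
    """Pair off one element from each set at a time; the sets are
    similar exactly when both run out together."""
    a, b = set(a), set(b)
    while a and b:
        a.pop()     # match up one moon...
        b.pop()     # ...with one kitten
    return not a and not b   # nothing left over on either side

print(similar(mars_moons, kittens))   # True
```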


Thus, we are now able to look at two sets and figure out if they belong in the same class of similar sets.   Russell defines the number of a class of sets as the class of all sets that are similar to it.   Personally, I think this would have been a bit clearer if Russell hadn’t chosen to overload the term ‘number’ here, using it twice with slightly different definitions.  So let’s call the class of similar sets a numerical grouping for clarity.   Then the definition we started with, “A number is anything which is the number of some class”, becomes “A number is anything which is the numerical grouping of some class”, which at least doesn’t sound quite as circular.    The wording gets a little tricky here, and I’m sure some Russell scholars might be offended at my attempt to clarify it, but I think the key concept is this:   A number is defined as a class of sets, all of which can be put into 1-1 correspondence with each other, and which, if we conventionally count them (not allowed in the definition, but used here for clarification), have that number of elements.


Is this more satisfying than the Peano axioms?    Well, if we identify zero with the empty set and the succession operation with adding an element to a set and finding its new number, we can see that those axioms are still satisfied.   Furthermore, this interpretation does seem to rule out the pathological examples Russell mentions:  the numbers greater than 100, even numbers, and inverted powers of two all fail to meet this set-based definition.    And Russell successfully used this definition as the basis for numerous further significant mathematical works.   On the other hand, Russell’s method was not the final word on the matter:   philosophers of mathematics continue to propose and debate alternate definitions of whole numbers to this day.   


Personally, I know a whole number when I see it, and maybe that’s all the definition a non-philosopher needs on a normal day.   But it’s nice to know there are people out there somewhere thinking hard about why this is true.


And this has been your math mutation for today.



References:  

Monday, November 30, 2020

265: Defining Numbers, Sort Of

 Audio Link

One of the amazing things about mathematics in general is the way we can continuously discover and prove new results on the basis of simple definitions and axioms.    But in a way, this concept is similar to those ancient myths that our planet is sitting on the back of a giant turtle.   It sits on another turtle, which sits on another, and it’s turtles all the way down.   Where’s the bottom?    In order to prove anything, we need to start from somewhere, right?    


For millennia after the dawn of mathematics, people generally assumed that the starting point had to be the whole numbers and their basic operations:  addition, multiplication, and so on.   How much simpler could you get than that?   But in the 19th century, philosophers and mathematicians began serious efforts to improve the overall rigor of their endeavor, by defining simpler notions from which you could derive whole numbers and prove their basic properties.   The average person may not need a proof that whole numbers exist, but mathematicians can be a bit picky sometimes.   One of the most successful of these efforts was by Giuseppe Peano of Italy, who published his set of axioms in 1889.


The Peano axioms can be stated in several equivalent forms, but for the moment we’ll use the version in Bertrand Russell’s nice “Introduction to Mathematical Philosophy”.   As Russell states it, they are based on three primitive notions and five axioms.   The notions are the concepts of zero, number, and successor.   Note that he’s not saying we start by knowing what all the numbers are, just that we are assuming some set exists which we are calling “numbers”.   Based on these, the five axioms are:

  1. Zero is a number.
  2. The successor of any number is a number.
  3. No two numbers have the same successor.
  4. Zero is not the successor of any number.   (Remember that we’re just defining whole numbers here; negative numbers will potentially be a later extension to the system.)
  5. Induction works:   if some property P belongs to 0, and we can prove that if P is true for some number, it’s true for its successor, then P is true for all numbers.   


With these primitive notions, we can derive the existence of all the whole numbers, without having them known at the start.    For example, we know Zero has a successor which is a number, so let’s label that 1.   Then 1 has a successor number as well, so let’s call that 2, and so on.     We can also define the basic operations we’re used to, simply building on these axioms:  for example, let’s define addition.    We’ll create a plus operation, and define “a + 0” as equal to a for any number a.    Then we can define “a + the successor of b” as “the successor of (a+b)”.    So, for example, a + 1 equals a + “the successor of 0”, which becomes “the successor of a + 0”; by our original definition, this boils down to “the successor of a”.   Thus we have shown that the operation a+1 always leads to a’s successor, a good sign that we are using a reasonable definition of addition.     We can similarly define other operations such as multiplication and inequalities.
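To make this concrete, here’s a minimal sketch of these definitions in code.   (The tuple-wrapping representation is just my own convenient model of zero and successor, not anything from Peano or Russell.)

```python
# A toy model of the Peano primitives: zero is a bare token, and
# each successor simply wraps its predecessor.
ZERO = ()

def succ(n):
    """The successor of n."""
    return (n,)

def add(a, b):
    """Peano addition: a + 0 = a, and a + succ(b) = succ(a + b)."""
    if b == ZERO:
        return a
    return succ(add(a, b[0]))

def to_int(n):
    """Conventional counting, used only to display results."""
    return 0 if n == ZERO else 1 + to_int(n[0])

one = succ(ZERO)
two = succ(one)
print(to_int(add(one, two)))   # 3
```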


The Peano axioms were quite successful and useful, and a great influence on the progression of the foundations of mathematics.    Yet Russell points out that they had a few key flaws.   Think again about the ideas we started with:  zero, successors, and induction.   They certainly apply to the natural numbers… but could they apply to other things, that are not what we would think of as the set of natural numbers?     The answer is yes— there are numerous other sets that can satisfy the axioms.    


- One example:  let’s just define our “zero” for these axioms as the conventional whole number 100.   Then what is being described is the set of whole numbers above 100.   If you think about it for a minute, this won’t violate any of Peano’s axioms—  we are still defining a set of unique numbers with successors, none of which is below our “zero”, and to which induction applies.  

- As another example, let’s keep zero as our conventional zero, but define the “successor” operation as adding 2.   Now our axioms describe the set of even whole numbers.  A very useful set, indeed, but not the true set of whole numbers we were aiming to describe.

- As an even more absurd case, let’s define our “zero” as the conventional number 1, and the successor operation as division by 2.   Then we are describing the infinite progression 1, 1/2, 1/4, 1/8, and so on.    Another very useful series, but not at all matching our intention of describing the whole numbers.
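To see how easily these alternate models arise, here’s a small sketch that generates the first few “numbers” of a Peano-style system from any chosen zero and successor function:

```python
def first_numbers(zero, successor, count=6):
    """List the first few 'numbers' reachable from a chosen zero."""
    result, n = [], zero
    for _ in range(count):
        result.append(n)
        n = successor(n)
    return result

print(first_numbers(100, lambda n: n + 1))   # 100, 101, 102, ...
print(first_numbers(0, lambda n: n + 2))     # 0, 2, 4, 6, 8, 10
print(first_numbers(1, lambda n: n / 2))     # 1, 0.5, 0.25, ...
```

Each of these systems satisfies all five axioms, yet only one of them matches our intuitive whole numbers.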


Choosing alternate interpretations of this set, naturally, can lead to very weird interpretations of our derived operations such as addition and multiplication.   But Russell’s point is that despite the power and utility of Peano’s axioms, there is clearly something lacking.    This situation actually reminds me a bit of some challenges we encounter in my day job, proving that computer chip designs work correctly:   it’s nice if you can prove that your axioms lead to desired conclusions for your design, but you also need evidence that there aren’t also BAD designs that would satisfy those axioms equally well.   If such bad designs do exist, your job isn’t quite done.


Russell’s answer to this issue was to seek improved approaches to defining whole numbers based on set theory, which would more precisely correspond to our notion of what these numbers really are.    We’ll discuss this topic in a future podcast.    


And this has been your math mutation for today.



References:  


Sunday, October 11, 2020

264: Unbalanced Society

Audio Link

You may recall that in several past episodes I mentioned the odd pop philosopher Alfred Korzybski and his early 20th-century movement known as General Semantics.   Korzybski believed that the imprecision and misuse of language was responsible for many of society’s ills.   He came up with many supposedly practical ideas to fix this, such as using “indexing” and “dating” to add numerical tags to objects you reference, and minimizing the use of the verb “to be” due to its many possible meanings.   A few weeks ago I discovered that one of his books, “Manhood of Humanity”, was downloadable for free at Project Gutenberg, and I couldn’t resist taking a look to see if there were any more amusing ideas there.   And I did find one:  a supposed mathematical explanation for the many societal upheavals and conflicts of the past century.


Basically, Korzybski was looking at the pace of change in the various fields of knowledge we have been acquiring over time.   He made much of his observation that we are what he calls “time-binding” creatures:  unlike any other creature on the planet, we can learn things and pass them down to our descendants, so the process of learning and development happens at the level of human society, rather than just of individuals.   According to him, the simple, natural way most knowledge accumulates is through linear, or arithmetic, progressions:  a series like 2, 4, 6, 8, 10, where you steadily move forward by small jumps.   However, there are certain domains, in areas of science and technology, where in recent centuries every piece of new knowledge has drawn on massive amounts of previous ideas, creating a geometric, or exponential, progression, like 2, 4, 8, 16, 32, and so on.   In his view, this disconnect is the source of many of our problems.


In other words, early historical growth in social and political areas roughly tracked with growth of technology, but now technology has zoomed ahead of our social development due to this disconnect.   And this cannot be good for humanity as a whole.   As Korzybski stated, “It is plain as the noon-day sun that, if progress in one of the matters advances according to the law of a geometric progression and the other in accordance with a law of an arithmetical progression, progress in the former matter will very quickly and ever more rapidly outstrip progress in the latter, so that, if the two interests would be interdependent (as they always are), a strain is gradually produced in human affairs, social equilibrium is at length destroyed; there follows a period of readjustment by violence and force.”   He then goes on to state that this is a key cause of insurrections, revolutions, and wars.   


This idea seems like it might actually have something to it, to some degree.    But we do have to be careful here— while everyone is always focused on their own time, and our news media love to sensationalize wars and violence, there has been violence and war throughout the history of society.   Many argue that the real oddity of modern times is the proportion of humanity who can live out their lives secure from violence.     Korzybski did write this just after World War I, though, so we can understand why it might have looked to him like society was falling apart.   


Where he really went off the rails, though, is when he tried to prescribe a solution for this disconnect between societal and technological growth:  everyone must use his system of more precise and scientifically defined language, correctly defining ideas like “good”, “bad”, and “truth”, and then we will enable exponential growth in all fields of knowledge.   In his words:  “If only these three words could be scientifically defined, philosophy, law, ethics, and psychology would cease to be private theories or verbalism and they would advance to the rank and dignity of sciences.”   He even claimed that such correct definitions would have led to the kind of scientific societal reasoning that could have predicted and prevented World War I.


But when he tried to actually apply his reasoning to a practical matter, we see some slightly more concerning comments.   Reasoning by analogy with algebra, where you cannot combine terms like x^a + y^b without finding a common base, he argued that a government must give its people a “common base” to unite them.   He wrote, “Germany united the powers of living men and women and children; it gave them a common base; it gave them one common social mood and aim; they all became consolidated in service of that which is called the State… they worked, lived, and died for the State.”   He seemed to like this idea, only complaining that the German leadership then chose the wrong aims for their “united terms”.


This concept of making the social sciences more precise and mathematical appears continuously in 20th-century writing, from many authors.   It always ultimately fails:  aside from the inherent imprecision of the concepts involved, there is an obvious need for value judgements that cannot be sensibly derived from any mathematics.   In the end, Korzybski provided a few intriguing ideas, buried within loads and loads of sophistry and nonsense.   I always find it amusing to read this kind of stuff, as long as we all remember not to take it too seriously.   If we want to solve modern society’s problems, we can’t just lie back and let the math provide a magic formula.


And this has been your math mutation for today.



References: