Sunday, June 24, 2018

242: Effort and Impact

Audio Link


I’m always amused by management classes and papers that take some trivial and obvious mathematical model, give it a fancy name and some imprecise graphs, and proudly proclaim their brilliant insights.  Now of course, such models can be useful tools for thinking through daily issues, and most of us apply them implicitly as a simple matter of common sense, but it’s really important to recognize their limitations.  Many of those limitations come from the imprecision of the math involved, from human psychological factors, or from the difficulty of translating the many constraints of a real-life situation into a few simple parameters of a model.  I was recently reminded of a classic example of this pitfall, “Impact-Effort Analysis”, when reading a nice online article by my former co-worker Brent Logan.  While the core concepts of impact-effort analysis are straightforward, it is critical to recognize the limits of trying to apply this simple math to real-life problems.

The basic concept of impact-effort analysis is that, when you are trying to decide which tasks to work on, you should create a list and assign to each an “impact”, the value it will bring to you in the end, and an “effort”, the cost of carrying out the task.  You then plot all the tasks on an impact-vs-effort graph, which you divide into four quadrants, and see which quadrant each falls into.  For example, you may be considering ways to promote your new math podcast.  (By the way, this is just a hypothetical; who would dare compete with Math Mutation?)  A multimedia ad campaign would be high effort but high impact, the quadrant known as a “big bet”.  Posting a note to your 12 friends on your personal Facebook page would be low effort and low impact, which we label as “incremental”.  Hiring a fleet of joggers to run across the country wearing signboards with your URL would probably be high effort and low impact, which we label the “money pit” quadrant.  And finally, using your personal connections with Weird Al Yankovic to have him write a new hit single satirizing your podcast would be low effort and high impact— this is the ideal quadrant, the “easy wins”.  (And no, I don’t really have such connections, sorry!)
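
To make the quadrants concrete, here is a minimal Python sketch of how you might score and bucket the hypothetical promotion ideas above.  The task names, the 0-to-10 scores, and the threshold of 5 are all invented for illustration; they are not taken from Logan’s article.

```python
# Hypothetical impact/effort scores on a 0-10 scale; the thresholds are arbitrary.
def classify(impact, effort, impact_threshold=5, effort_threshold=5):
    """Return the impact-effort quadrant for a single task."""
    high_impact = impact >= impact_threshold
    high_effort = effort >= effort_threshold
    if high_impact and high_effort:
        return "big bet"
    if high_impact and not high_effort:
        return "easy win"
    if not high_impact and high_effort:
        return "money pit"
    return "incremental"

tasks = {
    "multimedia ad campaign":      (9, 9),   # (impact, effort)
    "post to 12 Facebook friends": (1, 1),
    "fleet of signboard joggers":  (2, 9),
    "Weird Al hit single":         (9, 1),
}

for name, (impact, effort) in tasks.items():
    print(f"{name}: {classify(impact, effort)}")
```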

This sounds like a sensible and straightforward idea— who could argue with rating your tasks this way in order to prioritize?  Well, that’s true as far as it goes:  if you can correctly assign impact and effort values, this method is very rational.  But Logan points out that our old friends, the cognitive biases, play a huge role in how we come up with these numbers in real life.

On the side of effort estimation, psychological researchers have consistently found that small teams underestimate the time it takes to complete a task.  Kahneman and Tversky labeled this the “Planning Fallacy”.  Factors behind it include simple optimism, failure to anticipate random events, the overhead of communication, and similar issues.  There is also the constant pressure from bosses who want aggressive-looking schedules to impress their superiors.  You may have heard the well-known software engineering saying that stems from this bias:  “The first 90% of the project takes the first 90% of the time, and the last 10% of the project takes the other 90% of the time.”

Similarly, on the impact side, it’s very easy to overestimate when you get excited about something you’re doing, especially when the results cannot be measured in advance.  In 2003, Kahneman and Lovallo extended their definition of the Planning Fallacy to include this aspect as well.  I find this one pretty easy to believe, looking at the way things tend to work at most companies.  If you can claim you, quote, “provided leadership” by getting the team to do something, you’re first in line for raises, promotions, etc.  Somehow I’ve never seen my department’s VP bring someone up to the front of an assembly and hand them the “Worry Wart Award” and a promotion for correctly pointing out that one of his ideas was a dumb waste of time.  If your company does that effectively, I may be sending them a resume.

Then there are yet more complications to the impact-effort model.  If you think carefully about it, you realize it has one more limitation, as pointed out in Logan’s article:  impact ratings need to include negative numbers.  That fleet of podcast-promoting joggers, for example, might cost more than a decade of the podcast’s profits, even if the “brilliant idea” earned you a raise at the time.  Thus, in addition to the four quadrants mentioned earlier, we need a category of “loss generators”, a new pseudo-quadrant at the bottom covering anything whose impact is below zero.

So what do we do about this?  Well, due to these tendencies to misestimate, and the danger of loss generators, we need to realize that the four quadrants of our effort-impact graph are not homogeneous or of equal size.  We need a very high threshold for estimated impact, and a low threshold for estimated effort, to compensate for these biases— so the “money pit” quadrant becomes a gigantic one, the “easy win” a tiny one, and the other two are proportionally reduced.  Unfortunately, it’s really hard to compensate this way with this type of model, since human judgement calls are so big a factor in the estimates.  You might subconsciously change your estimates to compensate.  Maybe if you had your heart set on that fleet of joggers, you would decide that its high impact should be taken as a given, and rate it just high enough to keep it out of the money pit.
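
As a rough sketch of what that bias compensation might look like, here is a hypothetical variant of the classifier above with a much higher impact bar, a much lower effort bar, and a “loss generator” check for negative impact.  The specific thresholds are, again, arbitrary numbers chosen only for illustration.

```python
# Hypothetical bias-compensated classifier: the thresholds 8 and 3 are arbitrary.
def classify_conservative(impact, effort, impact_threshold=8, effort_threshold=3):
    if impact < 0:
        return "loss generator"        # the pseudo-quadrant below the axis
    low_effort = effort <= effort_threshold
    if impact >= impact_threshold:
        return "easy win" if low_effort else "big bet"   # both now much smaller regions
    return "incremental" if low_effort else "money pit"  # money pit dominates the graph

print(classify_conservative(impact=6, effort=4))    # money pit
print(classify_conservative(impact=-2, effort=9))   # loss generator
```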

Logan provides a few suggestions for making this type of analysis a bit less risk-prone.  Using real data whenever possible to derive the effort and impact numbers is probably the most important one.  Also in this category is “A/B testing”, where you run a controlled experiment comparing multiple methods on a small scale before making a big investment.  Similarly, he suggests breaking tasks into smaller subtasks as much as possible:  the smaller the task, the easier it is to come up with a reasonable estimate.  Finally, attaching a confidence level to each estimate can also help.  I think one other factor that Logan doesn’t state explicitly is that we need to be consistently agile:  as a project progresses and new data becomes available, we should constantly re-examine our effort and impact estimates to decide whether some re-planning is justified.

But the key lesson here is that there are always a lot of subtle details in this type of planning, due to the uncertainties of business and of human psychology.  Always remember that writing down some numbers and drawing a pretty graph does NOT mean you have successfully used math to provide indisputable rationality for your business decisions.  You must keep taking follow-up actions to make up for the fact that human brains cannot be boiled down to a few numbers so simply.

And this has been your Math Mutation for today.

References:


Sunday, June 3, 2018

241: A Strange Fixation

Audio Link


Hi everyone.   Before we get started, I want to thank listener ‘katenmkate’ for posting another nice review on Apple Podcasts recently.   Thanks Kate!   By the way, other listeners:  the podcast world is constantly growing more competitive, so new positive reviews are still useful— please think about posting one if you like the podcast.   The same goes for the Math Mutation book, which is available & reviewable on Amazon.   Also remember, if you like Math Mutation enough that you want to donate money, please make a donation to your favorite charity & email me about it, and I’ll mention you on the podcast for that as well.

Now for today’s topic.   Try the following experiment:  take any 4-digit number and generate two values from it by arranging its digits in ascending and then descending order.   By the way, don’t choose a degenerate case where all the digits are the same, like ‘2222’.   (For today I’ll pronounce 4-digit numbers just by saying the digits, to save time.)   Now subtract the smaller 4-digit number from the larger one.  You should find that you get another 4-digit number (or a 3-digit one, in which case you can add a leading 0).   Repeat the process for a few steps.   Do you notice anything interesting?

For example, let’s start with 3524.  After we generate numbers by the orderings of the digits, we subtract 5432 - 2345, and get 3087.    Repeating the procedure, we subtract 8730-0378, and get 8352.   One more time, and we do 8532 - 2358, to get 6174.   So far this just seems like a weird procedure for generating arbitrary numbers, maybe a bit of a curiosity but nothing special.   But wait for the next step.   From 6174, we do 7641 - 1467, which gives us… 6174 again!   We have reached an attractor, or fixed point:   repeating our process always brings us back to this same number, 6174.   If you start with any other non-degenerate 4-digit number, you will find that you still get back to this same value, though the number of steps may vary.
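
Here is a minimal Python sketch of the procedure, assuming we pad intermediate results with leading zeros to keep them at four digits; running it from 3524 reproduces the sequence above.

```python
def kaprekar_step(n):
    """One step of Kaprekar's routine on a 4-digit number (leading zeros allowed)."""
    digits = sorted(f"{n:04d}")
    ascending = int("".join(digits))
    descending = int("".join(reversed(digits)))
    return descending - ascending

n, steps = 3524, 0
while n != 6174:
    n = kaprekar_step(n)
    steps += 1
    print(f"step {steps}: {n:04d}")   # prints 3087, 8352, then 6174
```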

This value, 6174, is known as Kaprekar’s Constant,  named after the Indian mathematician who discovered this strange property back in 1949.   Strangely, I was unable to find any mention of an elegant proof of why this number is special; it seems like it’s just one of those arbitrary coincidences of number theory.  It’s easy to prove the correctness of the Constant algebraically, but that really doesn’t provide much fundamental insight.   This constant does have some interesting additional properties though:   in the show notes, you’ll find a link to a fun website by a professor named Girish Arabale.    He uses a computer program to plot how many steps of Kaprekar’s operation it takes to reach the constant, indicating the step count by color, and generates some pretty-looking patterns.   The web page even showcases some music generated by this operation, though I still prefer David Bowie. 
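
If you want to play with the step-count idea yourself, without the colorful plots, here is a quick brute-force sketch of my own (not Arabale’s code) that tallies how many steps each non-degenerate 4-digit starting value needs; running it also confirms the well-known fact that none of them needs more than seven steps.

```python
from collections import Counter

def kaprekar_step(n):
    digits = sorted(f"{n:04d}")
    return int("".join(reversed(digits))) - int("".join(digits))

step_counts = Counter()
for start in range(1000, 10000):
    if len(set(f"{start:04d}")) == 1:   # skip degenerate repdigits like 2222
        continue
    n, steps = start, 0
    while n != 6174:
        n = kaprekar_step(n)
        steps += 1
    step_counts[steps] += 1

print(sorted(step_counts.items()))      # how many starting values need 0, 1, 2, ... steps
```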

We see more strange properties if we investigate different numbers of digits.   There is a corresponding constant for 3-digit operations, 495, but no single constant exists for higher digit counts.   If you try the same operation with 2, 5, or 6 digit numbers, for example, you will eventually settle into a small loop of numbers which generate each other, or one of a small set of possible self-generating constants, rather than reaching a single universal constant.    Let’s try 2 digits, starting arbitrarily at 90:      90 - 09 = 81.   81 - 18 = 63.  63 - 36 = 27.   72 - 27 = 45.  54 - 45 = 09— and you can see we’re now in a loop.   If we continue the operation, we will cycle through 81, 63, 27, 45, 9 forever!   You will see similar results if you try the operation in bases other than 10, where the digits take on slightly different meanings and a single Kaprekar-style constant doesn’t appear so easily.
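
The same kind of sketch makes the 2-digit behavior easy to see: iterate the digit-reordering subtraction starting from 90 and stop as soon as a value repeats.

```python
def two_digit_step(n):
    digits = sorted(f"{n:02d}")
    return int("".join(reversed(digits))) - int("".join(digits))

seen, n = [], 90
while n not in seen:
    seen.append(n)
    n = two_digit_step(n)

print(seen)                        # [90, 81, 63, 27, 45, 9]
print("cycle re-enters at", n)     # 81, so 81 -> 63 -> 27 -> 45 -> 9 repeats forever
```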

One other aspect of Kaprekar’s Constant worth pointing out is that it is really just a specific example of the general phenomenon of “attractors”.   If you take an arbitrary iterative operation, where you repeat some calculation that leads you back into the same space of numbers, it’s not that unusual to find that you eventually settle into a small loop, a small set of constants, or a fixed point like Kaprekar’s Constant.   I think his constant became famous because it arises from a relatively simple operation, and because it was discovered early in the history of the popularization of attractors as a mathematical concept.

Thinking about such things has actually become very important in some areas of computer science, such as the generation of pseudo-random numbers.   We often want a repeated mathematical procedure that generates numbers which look random to the consumer, but which, from the same starting point, called a “seed”, will always generate the same sequence.   This is important in areas like simulation testing of engineering designs.   In my day job, for example, I might want to see how a proposed new computer chip would react to a set of several million random stimuli, but in case something strange happens, I want to easily reproduce the same set of supposedly “random” inputs later.     The typical solution is a pseudo-random number generator, where the previous number generates the next one through some function.
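
As a toy illustration of that “previous number generates the next one” idea, here is a minimal linear congruential generator in Python.  This is only a sketch: the multiplier and increment are the classic Numerical Recipes constants, used here purely for familiarity, and real verification environments rely on far more carefully designed generators.

```python
def lcg(seed, count, a=1664525, c=1013904223, m=2**32):
    """Yield `count` pseudo-random values; each is a pure function of the previous one."""
    x = seed
    for _ in range(count):
        x = (a * x + c) % m
        yield x

print(list(lcg(seed=42, count=3)))
print(list(lcg(seed=42, count=3)))   # identical output: the same seed reproduces the same "random" stimuli
```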

Kaprekar’s Constant helps demonstrate that if we choose an arbitrary mathematical procedure to repeatedly generate numbers from other ones, the results might not end up as arbitrary or random-looking as you would expect.   You can create some convoluted repetitive procedure that really has no sensible reason in its definition for creating a predictable pattern, like Kaprekar’s procedure, and suddenly find that it always leads you down an elegant path to a specific constant or a small loop of repeated results.   Thus, the designers of pseudo-random number generators must be very careful that their programs really do create random-seeming results, and are free of the hazards of attractors and fixed points like Kaprekar’s Constant.

And this has been your Math Mutation for today.


References: