I’m always amused by management classes and papers that take some trivial and obvious mathematical model, give it a fancy name and some imprecise graphs, and proudly proclaim their brilliant insights. Now of course, such models may be a useful tool to help think through daily issues, and should be used implicitly as a simple matter of common sense, but it’s really important to recognize their limitations. Many limitations come from the imprecision of the math involved, human psychological factors, or the difficulty of translating the many constraints of a real-life situation into a few simple parameters of a model. I was recently reminded of a classic example of this pitfall, “Impact-Effort Analysis”, when reading a nice online article by my former co-worker Brent Logan. While the core concepts of impact-effort analysis are straightforward, it is critical to recognize the limitations of trying to apply this simple math to real-life problems.
The basic concept of impact-effort analysis is that, when you are trying to decide what tasks to work on, you should create a list and assign to each an “impact”, the value it will bring to you in the end, and an “effort”, the cost of carrying out the task. You then graph all the tasks on an impact vs effort graph, which you divide into four quadrants, and see which quadrant each falls into. For example, you may be considering ways to promote your new math podcast. (By the way, this is just a hypothetical, who would dare compete with Math Mutation?) A multimedia ad campaign would be high effort but high impact, the quadrant known as a “big bet”. Posting a note to your 12 friends on your personal Facebook page would be low effort and low impact, which we label as “incremental”. Hiring a fleet of joggers to run across the country wearing signboards with your URL would probably be high effort and low impact, which we label the “money pit” quadrant. And finally, using your personal connections with Weird Al Yankovic to have him write a new hit single satirizing your podcast is low effort and high impact— this is in the ideal quadrant, the “easy wins”. (And no, I don’t really have such connections, sorry!)
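For listeners who like to see the idea spelled out, here is a minimal sketch of the quadrant classification in Python. The 0-to-10 scoring scale, the threshold of 5, and the scores assigned to each hypothetical task are all illustrative assumptions, not anything from a real planning exercise.

```python
# Hypothetical sketch of impact-effort quadrant classification.
# Scores are on an assumed 0-10 scale with a midpoint threshold of 5.

def classify(impact, effort, threshold=5):
    """Assign a task to one of the four classic quadrants."""
    if impact >= threshold:
        return "big bet" if effort >= threshold else "easy win"
    return "money pit" if effort >= threshold else "incremental"

# Illustrative (impact, effort) guesses for the podcast-promotion ideas:
tasks = {
    "multimedia ad campaign": (8, 9),
    "post to 12 Facebook friends": (1, 1),
    "fleet of signboard joggers": (2, 9),
    "Weird Al hit single": (9, 2),
}

for name, (impact, effort) in tasks.items():
    print(f"{name}: {classify(impact, effort)}")
```

Of course, as the rest of this episode argues, the hard part is not this trivial bit of logic but the numbers you feed into it.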
This sounds like a sensible and straightforward idea— who could argue with rating your tasks this way in order to prioritize? Well, that is true: if you can correctly assign impact and effort values, this method is very rational. But Logan points out that our old friends, the cognitive biases, play a huge role in how we come up with these numbers in real life.
On the side of effort estimation, psychological researchers have consistently found that small teams underestimate the time it takes to complete a task. Kahneman and Tversky have labeled this the “Planning Fallacy”. Factors influencing this include simple optimism, failing to anticipate random events, overhead of communication, and similar issues. There is also the constant pressure from bosses who want aggressive-looking schedules to impress their superiors. You may have heard the well-known software engineering saying that stems from this bias: “The first 90% of the project takes the first 90% of the time, and the last 10% of the project takes the other 90% of the time.”
Similarly, on the impact side, it’s very easy to overestimate impact when you get excited about something you’re doing, especially when the results cannot be measured in advance. In 2003, Kahneman and Lovallo extended their definition of the Planning Fallacy to include this aspect as well. I find this one pretty easy to believe, looking at the way things tend to work at most companies. If you can claim you, quote, “provided leadership” by getting the team to do something, you’re first in line for raises, promotions, etc. Somehow I’ve never seen my department’s VP bring someone up to the front of an assembly and give them the “Worry Wart Award” and promotion for correctly pointing out that one of his ideas was a dumb waste of time. If your company does that effectively, I may be sending them a resume.
Then there are yet more complications to the impact-effort model. If you think carefully about it, you realize that it has one more limitation, as pointed out in Logan’s article: impact ratings need to include negative numbers. That fleet of podcast-promoting joggers, for example, might cost more than a decade of profits from the podcast, despite the fact that the brilliant idea earned you a raise. Thus, in addition to the four quadrants mentioned earlier, we need a category of “loss generators”, a new pseudo-quadrant at the bottom covering anything where the impact is below zero.
So what do we do about this? Well, due to these tendencies to misestimate, and the danger of loss generators, we need to realize that the four quadrants of our effort-impact graph are not homogeneous or of equal size. We need to have a very high threshold for estimated impact, and a low threshold for estimated effort, to compensate for these biases— so the “money pit” quadrant becomes a gigantic one, the “easy win” a tiny one, and the other two proportionally reduced. Unfortunately, it’s really hard to compensate in this way with this type of model, since human judgement calls are so big a factor in the estimates. You might subconsciously change your estimates to compensate. Maybe if you had your heart set on that fleet of joggers, you would decide that its high impact should be taken as a given, so you rate it just enough to keep it out of the money pit.
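To make the skewed-quadrant idea concrete, here is one hypothetical way to sketch it in code: a deliberately high bar for impact, a deliberately low bar for effort, and a “loss generator” pseudo-quadrant for anything with negative impact. The specific threshold values are made-up assumptions for illustration, not a recommendation.

```python
# Bias-adjusted sketch: skewed thresholds shrink the "easy win" region
# and enlarge the "money pit", and negative impact is caught separately.
# The impact_bar and effort_bar values are hypothetical.

def classify_adjusted(impact, effort, impact_bar=8, effort_bar=3):
    """Classify a task, compensating for optimistic impact estimates
    and underestimated effort, on an assumed 0-10 scoring scale."""
    if impact < 0:
        return "loss generator"  # the pseudo-quadrant below the axis
    if impact >= impact_bar:
        return "big bet" if effort >= effort_bar else "easy win"
    return "money pit" if effort >= effort_bar else "incremental"
```

With these settings, a task needs an impact estimate of at least 8 out of 10 to escape the money pit, which is exactly the kind of skepticism the biases above call for— though, as noted, nothing stops a motivated estimator from simply inflating a score to clear the bar.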
Logan provides a few suggestions for making this type of analysis a bit less risk-prone. Using real data whenever possible to get effort and impact numbers is probably the most important one. Also in this category is “A/B testing”, where you do a controlled experiment comparing multiple methods, on a small scale before making a big investment. Similarly, he suggests breaking tasks into smaller subtasks as much as possible— the smaller the task, the easier it is to come up with a reasonable estimate. Finally, factoring in a confidence level to each estimate can also help. I think one other implied factor that Logan doesn’t state explicitly is that we need to be consistently agile: as a project starts and new data becomes available, constantly re-examine our efforts and impacts to decide whether some re-planning may be justified.
But the key lesson here is that there are always a lot of subtle details in this type of planning, due to the uncertainties of business and of human psychology. Always remember that writing down some numbers and drawing a pretty graph does NOT mean that you have successfully used math to provide indisputable rationality to your business decisions. You must constantly take ongoing followup actions to make up for the fact that human brains cannot be boiled down to a few numbers so simply.
And this has been your math mutation for today.