The word “noise” has a special meaning in fields like statistics, referring to all the reasons why some result deviates from the ideal, like an incorrect prediction. It’s the familiar concept of distinguishing signal from noise.
When people are convicted of identical crimes, with similar backgrounds and circumstances, etc., we nevertheless expect sentences to differ. That too is “noise.” But we don’t expect sentences to vary from thirty days to five years. Nor do we expect medical diagnoses to be very noisy, differing greatly from one doctor to another.
Such expectations are often wrong, with noise being a bigger problem than we realize. So says the 2021 book Noise: A Flaw in Human Judgment, by Daniel Kahneman, Olivier Sibony, and Cass Sunstein.
The book refers to “stable pattern noise,” encompassing characteristics about you, different from other people’s, that make your judgments differ; and “occasion noise,” referring to extraneous factors — like your mood at a given moment — that also affect them. Perhaps confusingly, both “stable pattern” and “occasion” noise are subsets of overall “pattern noise.”
And the book also differentiates “level noise” — for example, different judges being generally tougher or more lenient — from (again) “pattern noise,” meaning differences in how judges apply that severity in specific cases. The authors further speak of “system noise” as encompassing the last two together. You got all that? There’s also bias. And plain old error. All told, a whole lotta noise.
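If it helps, the taxonomy nests more neatly when written out. As I read the book’s accounting (which is done in squared, mean-squared-error terms):

  overall error (MSE) = bias² + system noise²
  system noise² = level noise² + pattern noise²
  pattern noise² = stable pattern noise² + occasion noise²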
Early on, the book talks about insurance underwriters — professionals tasked with setting premiums to be charged to corporate customers. Too high and the insurer will lose business. Too low and it loses money. When asked to guesstimate how much quotes from experienced underwriters differ from one another (that is, the noise quotient), insurance executives typically say 10% or 15%. In reality it’s about five times greater. With hundreds of millions of dollars at stake.
The authors quote one veteran underwriter: “When I was new I would discuss 75% of cases with my supervisor . . . After a few years, I didn’t need to — I am now regarded as an expert . . . Over time I became more and more confident in my judgment.”
Now here’s the key point: her confidence grew as she “learned to agree with her past self.” Not from any objective confirmation that those past judgments were, in any sense, correct.
This describes a vast range of human psychology and behavior. It is, quite simply, doing what one’s always done. With no deep consideration of that behavior’s optimality. But if we actually tried subjecting ourselves to such examination, comprehensively, we couldn’t function. Probably couldn’t get ourselves out of bed in the morning. That has to be acknowledged — even while we recognize the suboptimality.
This applies to Kahneman’s entire well-known oeuvre — Thinking, Fast and Slow, etc. — showing how evolution has saddled us with many non-rational biases in our thinking. Like loss aversion: putting more weight on potential losses than on equal potential gains. Because, for our distant ancestors, “loss” could very well mean loss of life. So a loss avoidance bias made sense.
But even if many of our cognitive biases are not rational in a narrow sense, the whole system of cognition they make up is deeply rational. Because, again, we couldn’t function if we had to subject every daily decision or choice to conscious examination. To avoid that, evolution has given us a system of cognitive shortcuts and quick decision heuristics. (The fast thinking of Kahneman’s book title.) And it must be a terrific system, because it does enable most humans to function extremely well from moment to moment — and from year to year.
One concept that has grown in my thinking is the role of contingency in human affairs — ranging from individuals to groups to whole societies and their history. I have long been mindful of this effect in my own life, with tiny causes altering its whole course. The Noise book presents much evidence for how individual and group decisions can be affected by such small contingencies. Like something so simple, and seemingly unimportant, as who speaks first in a meeting. Jury deliberations are a particular focus of concern. The authors write about cascades, describing how even just one expressed opinion can trigger a succession of responses by other people who don’t realize how they’ve been unconsciously influenced.
A striking finding is that in making various kinds of judgments or predictions, based on various bits of information, mechanical formulas almost always do better than human analysts, even supposed experts. The key reason — humans are just too plagued by noise. And so we see growing recourse to artificial intelligence to make evaluations, like medical diagnoses.
More: when a human evaluates a set of variables to come up with a judgment, it’s not a formulaic process, yet it’s as if a formula is being applied, albeit a complex one. Studies have found that when a computer model is built from such a human’s own judgments, and then applied to the same variables, the model outperforms the human it was modeled on. We may think we bring complexity and richness and insight into our judgments. But what we really bring is noise.
And more: not only do such models outperform the humans they model; studies have found that any mechanistic formula, even randomly weighted, applied to the set of variables in play, will do better than “expert” human judgments.
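That result is easy to reproduce in a toy simulation. Here’s a minimal sketch, entirely my own construction rather than anything from the book: it assumes a linear world of five cues, a judge who weighs them roughly correctly but adds “occasion noise,” a model built by regressing her own judgments on the cues, and a Dawes-style random-weight formula (random magnitudes, correct signs). All the specific numbers are arbitrary assumptions.

```python
# Toy illustration (my own sketch, not the book's data): a model of a
# noisy judge, and even a random-weight formula, can track the truth
# better than the judge herself.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, k = 500, 500, 5

# A linear "world": the outcome depends on k cues plus irreducible noise.
w_true = np.ones(k)
X_train = rng.normal(size=(n_train, k))
X_test = rng.normal(size=(n_test, k))
y_test = X_test @ w_true + rng.normal(scale=0.7, size=n_test)

# The judge weighs the cues roughly correctly, but each judgment also
# carries "occasion noise" -- mood, fatigue, who spoke first, etc.
w_judge = w_true + rng.normal(scale=0.3, size=k)
j_train = X_train @ w_judge + rng.normal(scale=2.0, size=n_train)
j_test = X_test @ w_judge + rng.normal(scale=2.0, size=n_test)

# Model OF the judge: ordinary least squares on her own judgments,
# then applied with perfect consistency (no occasion noise).
w_model = np.linalg.lstsq(X_train, j_train, rcond=None)[0]

# An "improper" formula: random weights with the correct signs.
w_random = rng.uniform(0.1, 1.0, size=k)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print("judge vs truth:         ", round(corr(j_test, y_test), 2))
print("model-of-judge vs truth:", round(corr(X_test @ w_model, y_test), 2))
print("random weights vs truth:", round(corr(X_test @ w_random, y_test), 2))
```

Run it and, under these assumptions, both formulas correlate with the truth better than the judge does. The model-of-the-judge wins for a simple reason: it applies her own average weights with perfect consistency, stripping out the occasion noise.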
But supplanting human judgments with mechanistic decision methods provokes backlash. When noisiness in criminal sentencing became evident, the consequently enacted federal sentencing guidelines led to objections that this interfered with judges, well, judging. People do still value the idea of human judgment, bringing a “holistic” perspective to any decision. “This has deep intuitive appeal,” the authors acknowledge.
But, they say, their recommended “decision hygiene” strategies mostly aren’t mechanistic and don’t jettison human judgment. Instead, they mainly urge reducing noise by breaking problems down into component parts. And they recognize that while reducing noise is broadly desirable, excessive fixation on it can conflict with other values. Noise is like dirt in your home — its optimal amount is not zero, because attaining zero costs more than it’s worth.
Intelligence also helps combat noise. Yes, “intelligence” is a fraught concept. But the book argues that, in fact, tests of “General Mental Ability” are highly predictive of performance. High achievers overwhelmingly tend to have higher GMAs. Even within the top 1%, gradations actually make a big difference. Someone at the 99.8th GMA percentile will likely significantly outperform someone at the 99.0th. (My own example bears this out. I think I’m at least close to the 99th percentile, but not higher. And I feel that difference, compared to really smart people.)
Conversely, lower GMA scores are predictive of people believing in bunk like astrology and falling for fake news. Here’s a GMA test question: in a race, you pass the runner in second place. What place are you in now? Your instinctive answer (first) is likely wrong: passing the runner in second place merely puts you in second.
On the other hand, I’ve long believed that carefully agonizing over a decision doesn’t necessarily improve upon your initial gut response. One chapter began by asking what percentage of the world’s airports are in the United States. “Thirty percent” immediately popped into my head. Then I said to myself, “Wait, let’s think methodically about this.” America has less than 5% of global population. But some big countries are much less developed. And we have a lot of little airports. Mulling it all over, I revised my answer to 15%.
The question introduced a discussion of how one’s first instinctive response is often actually better than a carefully considered one (because the latter is corrupted by noise). The correct answer: 32%!