Thinking, Fast and Slow

by Daniel Kahneman

Publisher: Farrar, Straus and Giroux
Copyright: 2011
ISBN: 1-4299-6935-0
Format: Kindle
Pages: 448

Daniel Kahneman is an academic psychologist and the co-winner of the 2002 Nobel Memorial Prize in Economic Sciences for his foundational work on behavioral economics. With his long-time collaborator Amos Tversky, he developed prospect theory, which describes how people choose between probabilistic alternatives involving risk. That collaboration is the subject of Michael Lewis's book The Undoing Project, which I have not yet read but almost certainly will.

This book is not only about Kahneman's own work, although there's a lot of that here. It's a general overview of cognitive biases and errors as explained through an inaccurate but useful simplification: modeling human thought processes as two competing systems with different priorities, advantages, and weaknesses. The book mostly focuses on the contrast between the fast, intuitive system one and the slower, systematic system two, hence the title, but the last section of the book gets into hedonic psychology (the study of what makes experiences pleasant or unpleasant). That section introduces a separate, if similar, split between the experiencing self and the remembering self.

I read this book for the work book club, although I only got through about a third of it before we met to discuss it. For academic psychology, it's quite readable and jargon-free, but it's still not the sort of book that's easy to read quickly. Kahneman's standard pattern is to describe an oddity in thinking that he noticed, a theory about the possible cause, and the outcome of a set of small experiments he and others developed to test that theory. There are a lot of those small experiments, and all the betting games with various odds and different amounts of money blurred together unless I read slowly and carefully.

Those experiments also raise the elephant in the room, at least for me: how valid are they? Psychology is one of the fields facing a replication crisis. Researchers who try to reproduce famous experiments are able to do so only about half the time. On top of that, many of the experiments Kahneman references here felt artificial. In daily life, people spend very little time making bets of small amounts of money on outcomes with known odds. The bets are more likely to be for more complicated things such as well-being or happiness, and the odds of most real-world situations are endlessly murky. How much does that undermine Kahneman's conclusions? Kahneman himself takes the validity of this type of experiment for granted and seems uninterested in this question, at least in this book. He has a Nobel Prize and I don't, so I'm inclined to trust him, but it does give me some pause.

It didn't help that Kahneman cites the infamous marshmallow experiment approvingly and without caveats. That's a pet peeve of mine, and it means he fails my normal test for whether a popular psychology writer has taken a sufficiently thoughtful approach to analyzing the validity of experiments.

That caveat aside, this book is fascinating. One of the things that Kahneman does throughout, which is both entertaining and convincing, is show readers their own brains making mistakes in real time. It's a similar experience to looking at optical illusions (indeed, Kahneman makes that comparison explicitly). Once told what's going on, you can see the right answer, but your brain is still determined to make an error.

Here's an example:

A bat and ball cost $1.10.
The bat costs one dollar more than the ball.
How much does the ball cost?

I've prepped you by talking about cognitive errors, so you will probably figure out that the answer is not 10 cents, but notice how much your brain wants the answer to be 10 cents, and how easy it is to be satisfied with that answer if you don't care that much about the problem, even though it's wrong. The book is full of small examples like this.
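
To see why, write the problem out as two constraints and solve them. Here's a minimal sketch in Python (my own illustration, not code from the book):

    # The problem gives two constraints:
    #   bat + ball == 1.10
    #   bat == ball + 1.00
    # Substituting the second into the first:
    #   (ball + 1.00) + ball == 1.10, so 2 * ball == 0.10.
    ball = 0.10 / 2        # $0.05, not the intuitive $0.10
    bat = ball + 1.00      # $1.05
    print(ball + bat)      # 1.10 (up to floating-point rounding)

    # The intuitive answer fails the first constraint:
    print(0.10 + (0.10 + 1.00))   # 1.20, not 1.10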

Kahneman's explanation for the cognitive mistake in this example is the subject of the first part of the book: two-system thinking. System one is fast, intuitive, pattern-matching, and effortless. It's our default, the system we use to navigate most of our lives. System two is deliberate, slow, methodical, and more accurate, but it's effortful, to the point that the effort can be detected in a laboratory by looking for telltale signs of concentration. System two applies systematic rules, such as the process for multiplying two-digit numbers together or solving math problems like the above example correctly, but it takes energy to do this, and humans have a limited amount of that energy. System two is therefore lazy; if system one comes up with a plausible answer, system two tends to accept it as good enough.

This in turn provides an explanation for a wealth of cognitive biases that Kahneman discusses in part two, including anchoring, availability, and framing. System one is bad at probability calculations and relies heavily on availability. For example, when asked how common something is, system one will attempt to recall an example of that thing. If an example comes readily to mind, system one will decide that it's common; if it takes a lot of effort to think of an example, system one will decide it's rare. This leads to endless mistakes, such as worrying about memorable "movie plot" threats such as terrorism while downplaying the risks of far more common events such as car accidents and influenza.

The third part of the book is about overconfidence, specifically the prevalent belief that our judgments about the world are more accurate than they are and that the role of chance is smaller than it actually is. This includes a wonderful personal anecdote from Kahneman's time in the Israeli military evaluating new recruits to determine what roles they would be suited for. Even after receiving clear evidence that their judgments were no better than random chance, everyone involved kept treating the interview process as if it had some validity. (I was pleased by the confirmation of my personal bias that interviewing is often a vast waste of everyone's time.)

One fascinating takeaway from this section is that experts are good at making specific observations of fact that an untrained person would miss, but are bad at weighing those facts intuitively to reach a conclusion. Keeping expert judgment of decision factors but replacing the final decision-making process with a simple algorithm can provide a significant improvement in the quality of judgments. One example Kahneman uses is the Apgar score, now widely used to determine whether a newborn is at risk of a medical problem.
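
To make that concrete, here's a minimal sketch, in Python, of what an Apgar-style scorer looks like (my own illustration of the pattern, not code from the book): experts rate each factor, and the final decision is a plain sum rather than an intuitive weighing.

    def apgar_style_score(appearance, pulse, grimace, activity, respiration):
        # Each factor is rated 0-2 by a trained observer.
        factors = [appearance, pulse, grimace, activity, respiration]
        assert all(0 <= f <= 2 for f in factors), "factors are rated 0-2"
        return sum(factors)   # 0-10; low totals flag an at-risk newborn

    # Expert observation feeds in; intuitive synthesis of the factors does not.
    print(apgar_style_score(2, 2, 1, 2, 2))   # 9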

The fourth part of the book discusses prospect theory, and this is where I got a bit lost in the endless small artificial gambles. However, the core idea is simple and quite fascinating: humans tend to make decisions based on the potential value of losses and gains, not the final outcome, and the way losses and gains are evaluated is neither symmetric nor mathematical. Humans are loss-averse, willing to give up expected value to avoid something framed as a loss, and willing to pay a premium for certainty. Intuition also breaks down at the extremes; people are very bad at correctly understanding odds like 1%, instead treating them as 0% or as more than 5% depending on the framing.
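
As a rough illustration of that asymmetry, here's a sketch of the kind of value function prospect theory proposes. The functional form and parameters come from Tversky and Kahneman's later quantitative work, not from this book's text, so treat the numbers as indicative:

    def value(x, alpha=0.88, lam=2.25):
        # Outcomes are valued relative to a reference point, with
        # diminishing sensitivity; losses loom about 2.25x larger.
        if x >= 0:
            return x ** alpha
        return -lam * ((-x) ** alpha)

    # A 50/50 bet to win or lose $100 is neutral in expected dollars,
    # but negative in subjective value, so most people refuse it:
    print(0.5 * value(100) + 0.5 * value(-100))   # roughly -36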

I was impressed that Kahneman describes the decision-making model that preceded prospect theory, explains why its simplicity made it attractive and why it was abandoned only because prospect theory made meaningfully more accurate predictions, and then pivots to pointing out the places where prospect theory is clearly wrong and an even more complicated model would be needed. It's a lovely bit of intellectual rigor and honesty that is too often missing from both popularizations and from people talking about their own work.

Finally, the fifth section of the book is about the difference between life as experienced and life as it is remembered. This includes a fascinating ethical dilemma: the remembering self is highly sensitive to how unpleasant an experience was at its conclusion, but remarkably insensitive to the duration of pain. Experiments indicate that someone will have a less negative memory of a painful event where the pain gradually decreased at the end than of an event where the pain was at its worst at the end, even if the worst moment of pain was the same in both cases and the second event was shorter overall. How should we react to that in choosing medical interventions? The intuitive choices for pain reduction are to minimize the total length of time someone is in pain or to reduce the worst moment of pain, both of which are correctly reported as less painful in the moment. But neither is the approach that will be remembered as less painful later. Which of those experiences is more "real"?
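
The regularity behind this dilemma is often called the peak-end rule: remembered pain tracks roughly the average of the worst moment and the final moment, largely ignoring duration. A toy comparison in Python (the rule is from the book; the numbers are invented for illustration):

    def remembered_pain(samples):
        # Peak-end rule: memory ~ average of the peak and the final moment.
        return (max(samples) + samples[-1]) / 2

    short_but_ends_badly = [6, 7, 8]        # shorter, ends at its worst
    longer_with_taper = [6, 7, 8, 5, 3]     # same peak, pain tapers off

    print(sum(short_but_ends_badly), remembered_pain(short_but_ends_badly))  # 21 8.0
    print(sum(longer_with_taper), remembered_pain(longer_with_taper))        # 29 5.5
    # More total pain, but the milder memory.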

There's a lot of stuff in this book, and if you are someone who (unlike me) is capable of reading more than one book at a time, it may be a good book to read slowly in between other things. Reading it straight through, I got tired of the endless descriptions of experimental setup. But the two-system description resonated with me strongly; I recognized a lot of elements of my quick intuition (and my errors in judgment based on how easy it is to recall an example) in the system one description, and Kahneman's description of the laziness of system two was almost too on point. The later chapters were useful primarily as a source of interesting trivia (and perhaps a trick to improve my memory of unpleasant events), but I think being exposed to the two-system model would benefit everyone. It's a quick and convincing way to remember to be wary of whole classes of cognitive errors.

Overall, this was readable, only occasionally dense, and definitely thought-provoking, if quite long. Recommended if any of the topics I've mentioned sound interesting.

Rating: 7 out of 10

Reviewed: 2019-08-23
