Underwriting uncertainty: Heuristics and biases

Understanding the sources of cognitive and intuitive errors can give underwriters the tools they need to avoid mistakes.

The art of underwriting consists of several key elements, among them data gathering, analysis, decision making, and follow-up evaluation as accounts come up for renewal. Renewal is the underwriter's opportunity to identify errors made in the initial underwriting process and to correct them.

Errors will always be a part of the process, at least as long as people are involved in any way. Eliminating all errors will never be possible, but it is possible to eliminate predictable errors, once we understand where they are likely to occur.

Novelist W.E.B. Griffin, in “The Victim,” part of his Badge of Honor series, describes the qualities of a good detective. I find they apply equally well to underwriters. To paraphrase: A good underwriter never forgets that they are ignorant. They know very little about each risk that crosses their desk and should look to any source of information that can help diminish the totality of their ignorance. Finally, they must make peace with the fact that some ignorance will always remain.

Identifying heuristics

Nobel laureate Daniel Kahneman, in his bestselling book “Thinking, Fast and Slow,” identified a number of ways in which the mind takes shortcuts and makes intuitive leaps — sometimes leaping in the wrong direction.

He describes two systems at work: System 1, which is lightning fast and nearly effortless, and brings answers to mind almost before the question has been fully understood; and System 2, the more deliberate, effortful, and thoughtful side. System 2 is most often engaged on purpose, while System 1 runs on autopilot. System 1 uses a number of tools to do its amazing work; among them are pattern recognition, association, and quick jumps to estimates of the answers to complex questions. In many, even most, circumstances these intuitive answers that come to mind all but unbidden are correct, or so nearly correct as to make no difference. But sometimes these intuitive answers are very wrong, and predictably so.

The combination of persistent ignorance and the System 1 tendency to jump to conclusions can be a recipe for underwriting errors. That’s the bad news. The good news is, many of these errors are predictable, and as such, can be prevented. Some predictable sources of cognitive and intuitive errors are: availability, representativeness, “what you see is all there is,” and bounded rationality.

Kahneman gives a number of examples of these heuristics (shortcuts) that the mind employs, particularly those that lead to the wrong answers, such as: A bat and ball together cost $1.10. The bat costs a dollar more than the ball. How much is the ball?

Almost without thought the answer $0.10 comes to most people’s minds. Those who are good intuitive mathematicians will often recognize that there is something wrong with that intuitive answer, but it still comes readily to mind. (The actual answer is $0.05 for the ball and $1.05 for the bat.)
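To spell out the arithmetic behind that answer, here is a minimal sketch (the variable names are mine, used only for illustration): if the ball costs x, the bat costs x + $1.00, so x + (x + 1.00) = 1.10 and x = 0.05.

```python
# A minimal sketch of the bat-and-ball arithmetic described above.
# Let x be the price of the ball; the bat then costs x + 1.00,
# and x + (x + 1.00) = 1.10, so 2x = 0.10 and x = 0.05.
ball = (1.10 - 1.00) / 2
bat = ball + 1.00
print(f"ball = ${ball:.2f}, bat = ${bat:.2f}, total = ${ball + bat:.2f}")
# ball = $0.05, bat = $1.05, total = $1.10
```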

Another question he used in his research studies was: Studies have shown that highly intelligent women tend to marry men of lesser intelligence. What causes this behavior?

This is something of a trick question, as there is no underlying cause of the behavior. There is an explanation for the phenomenon, but not a causal explanation. The answer lies in the statistical reality called regression to the mean.

The average person is not highly intelligent, so on average, women of high intelligence will find partners who are closer to average intelligence. Not because they set out to do so, but just because of the probabilities at work. The available pool of highly intelligent potential mates is small; the pool of available mates who are less intelligent is much larger. This is hard for System 1 to see because it is primed and programmed to find what Kahneman calls “causal stories” to bring observed phenomena into clearer focus.
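A quick simulation makes the same point without any causal story. The sketch below is my own illustration, not Kahneman's study design: it assumes, purely for demonstration, that scores follow a normal distribution and that partners pair up independently of intelligence, and it shows that the partners of very high scorers still average out near the population mean.

```python
# Hypothetical illustration of "no causal story needed": scores are drawn
# from a normal distribution (mean 100, sd 15) and partners are paired at
# random, with no link between the two scores at all.
import random

random.seed(0)
pairs = [(random.gauss(100, 15), random.gauss(100, 15)) for _ in range(100_000)]

# Keep only the couples where the first partner scores very high (130+).
high = [(a, b) for a, b in pairs if a >= 130]
avg_other = sum(b for _, b in high) / len(high)

print(f"couples with a 130+ partner: {len(high)}")
print(f"average score of the other partner: {avg_other:.1f}")  # close to 100
```

The "lesser intelligence" result falls out of the probabilities alone; no behavior needs to be explained.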

The availability heuristic refers to the tendency to evaluate the frequency of a phenomenon based on how easily examples of the phenomenon can be called to mind. For example, when asked, “Is Aarav a common first name for boys?” the mind, in the person of System 1, recognizes that this is a tough question to answer — most of us do not keep top-of-mind statistics on which first names are most common. In response to that reality, System 1 does a little magic trick and substitutes an easier question, “How many boys named Aarav have I come across recently?” This is an easier question to answer, and System 1 slides the answer to the easier question into the place of the answer to the harder question.

Representativeness refers to the tendency of System 1 to use stereotyping (your brain was profiling long before profiling was in the media). Rather than focusing on statistical base rates in determining the probability of a given event, System 1 looks at the subject and makes snap judgments based on nothing more than the image in mind; it looks like a duck, so it must be a duck, even if the statistical base rates clearly suggest that it is highly likely not, in fact, a duck.
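A short, purely hypothetical calculation shows how badly the duck impression can mislead when the base rate is low. The numbers below are invented for illustration: suppose the class of interest makes up 5% of submissions, and the “looks like one” impression fires for 90% of true members but also for 20% of everything else.

```python
# Hypothetical numbers illustrating why base rates should outweigh the
# "it looks like a duck" impression when the class is rare.
p_class = 0.05               # base rate: only 5% of submissions are in the class
p_looks_if_class = 0.90      # the impression fires for 90% of true members
p_looks_if_other = 0.20      # ...and, falsely, for 20% of everything else

p_looks = p_looks_if_class * p_class + p_looks_if_other * (1 - p_class)
p_class_if_looks = p_looks_if_class * p_class / p_looks

print(f"P(really in the class | it looks like it) = {p_class_if_looks:.2f}")
# About 0.19 -- most of the things that look like ducks here are not ducks.
```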

Perhaps the most pernicious of the identified heuristics is “What you see is all there is.” System 1 moves fast, and it doesn’t always take notice of pertinent information that is absent. It can take a conscious effort to evaluate the available data and determine what significant data elements are not present. System 1 will want to move straight to an answer; it takes a deliberate and effortful interjection from System 2 to say, “Hold on, what about X?” This is a very easy trap to fall into as an underwriter.

If all of the information we see on the application fits the guidelines and seems favorable, it can be easy to make the underwriting decision prior to the more thorough analysis that might reveal a key piece of information that is not present. Once System 1 has made a decision, even a faulty one, it can be challenging to reverse the decision even in light of adverse data that becomes available later.

Finally, bounded rationality. This is a term coined by economist Herbert Simon, which was adopted by Kahneman and Tversky when their psychological research identified the same tendencies as Simon had seen in his economic research. Specifically, the term refers to the observation that people behave rationally in many cases, but not always.

There are boundaries around the behavior that is consistently rational, and these boundaries are both observable and predictable. In the case of Kahneman and Tversky’s research, people are not good intuitive statisticians (see the bat-and-ball question and the question about highly intelligent women). The average person gets both of these wrong because their System 1 jumps to an appealing answer faster than their System 2 can do the math or call to mind the statistical base rates.

Avoiding underwriter error

Each of the errors of intuition outlined above is predictable and consistent across most of the population. Because of that consistency, we can design procedures and tools to help individual underwriters avoid the mistakes that their System 1 wants to make. Among the readily available tools to help avoid these errors are guidelines, checklists, and the reasonableness test.

A clear set of eligibility and procedural guidelines can help the underwriter to slow down and engage their System 2 along the way. Checklists can also help, especially with avoiding the “what you see is all there is” bias. A thorough checklist of questions to ask and data to gather for each risk can go a long way towards helping underwriters spot gaps in the information provided.
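As a concrete, entirely hypothetical sketch of the checklist idea, the snippet below forces a pass over what is missing from a submission rather than only what is present. The field names are made up for illustration; any real checklist would reflect a company's own guidelines.

```python
# A minimal, hypothetical submission checklist: the point is to surface the
# data elements that are absent, which is exactly what System 1 glosses over.
REQUIRED_FIELDS = [
    "year_built", "construction_class", "protection_class",
    "occupancy", "loss_history_5yr", "requested_limits",
]

def missing_items(submission: dict) -> list[str]:
    """Return the required fields that are absent or blank."""
    return [field for field in REQUIRED_FIELDS
            if submission.get(field) in (None, "", [])]

# An application that "looks complete" at a glance but is not.
application = {
    "year_built": 1987,
    "construction_class": "joisted masonry",
    "occupancy": "restaurant",
    "requested_limits": 2_000_000,
}

gaps = missing_items(application)
if gaps:
    print("Hold on -- still missing:", ", ".join(gaps))
```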

Most useful of all in my experience is the reasonableness test — run the high points of the risk past another underwriter. Their System 1 will be at work also, but engaging in conversation is one of the ways to nudge System 2 into action, for both parties. Sometimes just saying it out loud is enough to bring the soft white underbelly to light. I couldn’t begin to count the number of risks that I have talked over with a coworker that never made it past the opening paragraph — inside the first two minutes, the presenter can see the error that they were about to make (whether it was me or my coworker), and we can get back to work quickly.

Understanding the problem gets us halfway to solving it. Just knowing that we are inclined to make incorrect judgments on the fly can help us invoke our System 2 earlier in decision making, especially in scenarios similar to those outlined above (representativeness can be made to work in our favor in this way). Being vigilant for instances in which System 1 tries to substitute an easier question for the harder question actually asked can also help us deliberately redirect our thinking back to the tougher question at hand. By making the extra effort up front, we can minimize the number of cases in which underwriting errors are not found until after a loss, or until the policy comes up for renewal.

Michael Brown, CPCU (michael@goldenbear.com) is vice president and property department manager at Golden Bear Insurance Company. The opinions expressed here are the author’s own. 
