Jon M. Huntsman School of Business

learn about the latest and greatest from the School of Business

Friday, March 7, 2014

Judging a Part by the Size of its Whole: The Category Size Bias in Probability Judgments


By Aaron R. Brough (Huntsman School of Business, Utah State University) and Mathew S. Isaac (Albers School of Business and Economics, Seattle University)

Consumers might be said to have a prediction addiction—they speculate about sports, politics, weather, stocks, sweepstakes, health, and relationships, to name just a few areas. What’s more, predictions often guide their decisions. For example, they may decide to carry an umbrella after considering the chance of rain, to invest after forecasting the stock market’s performance, or to marry after predicting the likelihood of marital bliss. With all this practice, one might expect consumers to be good at judging probability. However, their predictions are often wrong.

One factor contributing to consumers’ flawed judgments is categorization; when making a prediction, consumers may be distracted by how all the possibilities are grouped. Our research looks at one aspect of categorization that hasn’t previously been explored—whether size matters. Our basic question is: can the size of the category to which an outcome belongs make that outcome seem more or less likely to occur? For example, would a spectator feel that a particular Olympic skier is more likely to win an individual gold medal if many (vs. few) skiers in the race are from the same country? Similarly, would an investor expect a specific stock to perform better if many (vs. few) other stocks in the portfolio are classified into the same industry? Or would a patient’s decision to visit the doctor for a lung cancer screening be influenced by whether lung cancer is one of many (vs. few) diseases listed on a health brochure as potentially preventable?

To each of these questions, the rather obvious answer is that category size shouldn’t matter. By analogy, common sense suggests that a bird belonging to a large flock is not necessarily a large bird. However, we discovered that people don’t seem to apply the same logic to probability judgments, even though they should. Instead, they sometimes believe that if an outcome is grouped with many other possible outcomes into a large category, it must be more likely to occur. The problem with bundling predictions in this way is that it interferes with people’s likelihood estimates. The “real” likelihood of American Olympic skier Bode Miller winning the slalom shouldn’t be influenced by the number of Americans who also compete in the final race. And yet, when there are more Americans in the race, people may somehow think there’s a greater chance that Bode will win. We call this mistake the category size bias.

In a series of five experiments, we investigated how changes in category size affect people’s judgments about probability and found evidence of the category size bias. We examined predictions across numerous contexts, including games of chance (e.g., lotteries), games of skill (e.g., athletic competitions), and assessments of risk (e.g., security threats). We discovered that predictions can be biased by category size—even when the basis for categorization is completely irrelevant and every outcome is equally likely to occur. For example, in one experiment, participants believed that the probability of winning a lottery was higher if their ticket color happened to be the same as many (vs. few) of the other gamblers’ tickets—and they were willing to wager an average of 24% more as a result. In another experiment, participants estimated a team’s odds of winning the NCAA basketball tournament to be significantly better when the team’s mascot shared similar features with the mascots of many (vs. few) other teams. Participants in yet another experiment judged the risk of an IT security threat to be higher and were more likely to take precautions when their actions were seen as part of a large (vs. small) group of preventative behaviors.

“We found that the accuracy of people’s predictions, as well as related decisions, can be affected by meaningless groupings,” Dr. Brough said. “For example, imagine a lottery in which most of the players have a blue ticket and only a few have a yellow ticket. You would correctly conclude that the winning ticket is more likely to be blue than yellow. However, you might also incorrectly conclude that if your ticket is blue you are more likely to win, and that if your ticket is yellow you are less likely to win. Even though every ticket in reality provides an equal chance of winning, people act as though their ticket somehow inherits the probability of its entire group. I like to think of it as a case when bunches change hunches.”
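To make the arithmetic behind that lottery example concrete, here is a minimal simulation sketch. The 90/10 split between blue and yellow tickets, the number of trials, and the variable names are illustrative assumptions, not figures from the experiments:

```python
import random

# Hypothetical lottery: 90 blue tickets and 10 yellow tickets; one winner drawn at random.
N_BLUE, N_YELLOW = 90, 10
tickets = ["blue"] * N_BLUE + ["yellow"] * N_YELLOW

trials = 100_000
blue_wins = 0        # category-level question: how often is the winning ticket blue?
my_ticket_wins = 0   # individual-level question: how often does one specific ticket (index 0, a blue one) win?

for _ in range(trials):
    winner = random.randrange(len(tickets))
    if tickets[winner] == "blue":
        blue_wins += 1
    if winner == 0:
        my_ticket_wins += 1

print(f"P(winning ticket is blue)  ~ {blue_wins / trials:.2f}")       # about 0.90
print(f"P(my specific ticket wins) ~ {my_ticket_wins / trials:.4f}")  # about 0.01, for any single ticket
```

Whichever color you hold, a single ticket’s chance of winning is 1 in 100; only the category-level question (“what color will the winning ticket be?”) depends on how many tickets share your color.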

It’s not just that people are bad at math. Instead, they seem to confuse judgments about an entire category with judgments about an individual member of that category. There’s a difference between predicting that any diamond card will be drawn from a standard deck of 52 playing cards and predicting that a specific card (e.g., the Queen of diamonds) will be drawn. Knowing the number of cards in the diamond suit is relevant to the first prediction, but to make the second prediction all you need to know is how many cards are in the deck—categorization is irrelevant. Yet people tend to rely on information about a category when making predictions about an individual category member, and that’s where they get into trouble.
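The playing-card example reduces to two different calculations. The snippet below is simply a worked restatement of that arithmetic, not an analysis from the research:

```python
from fractions import Fraction

DECK_SIZE = 52
DIAMOND_SUIT_SIZE = 13  # size of the category (the diamond suit)

# Prediction 1: "some diamond will be drawn" -- here the category size is relevant.
p_any_diamond = Fraction(DIAMOND_SUIT_SIZE, DECK_SIZE)  # 13/52 = 1/4

# Prediction 2: "the Queen of diamonds will be drawn" -- only the deck size matters;
# how many other cards share its suit is irrelevant.
p_queen_of_diamonds = Fraction(1, DECK_SIZE)            # 1/52

print(p_any_diamond, p_queen_of_diamonds)               # prints 1/4 1/52
```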

This research aims to provide consumers, businesses, and policy makers with new insights regarding how categorization—and specifically category size—can impact perceptions of risk and probability. Our findings suggest that managing how risk-related information is grouped may increase early detection of preventable diseases and encourage responsible behavior. For example, when policy makers are crafting health-related messages for consumers, grouping a highly preventable disease such as lung cancer with a large (vs. small) number of other potential health risks could increase the perceived risk of contracting lung cancer, which may in turn persuade consumers to visit their doctor for regular screenings. Similarly, to the extent that a fatality report increases perceived risk by grouping car accidents with a large (vs. small) number of avoidable causes of death, drivers may be more inclined to wear a seatbelt. On the flip side, when accurate predictions are the goal, ignoring categories and focusing instead on individual outcomes can help avoid the category size bias and improve accuracy.

Dr. Aaron Brough

