In lieu of an abstract, here is a brief excerpt of the content:

  • The Experimental Imperative
  • Peter A. Ubel

You hear about a new drug, and you learn first about its risks, then about its benefits. You weigh them and decide that the drug is in your best interests. Now you learn about another drug, first finding out about its benefits, then its risks. You're not so keen to take this one. It doesn't feel like it is in your best interests.

Would you be surprised to find out that these pills are the same drug?

Decisions require information. But providing information in a neutral manner is difficult. In the case above, the problem is a decision bias called the recency effect: the most recent piece of information people receive plays a disproportionate role in their judgment.

In his provocative essay, Peter Schwartz raises important questions about what information should be provided to whom. He criticizes what he calls the quantitative imperative—the belief that patients should be informed, in numerical terms, about the risks and benefits of their treatment alternatives. I am not going to join Schwartz in debating whether numerical information should be mandatory. Instead, I want to lobby to make something else mandatory—experiments! Those of us who care about patient autonomy and informed consent should work to find out what kind and manner of information will be most useful and least biasing to the largest number of people. Call it the experimental imperative—a mandate for more and better experiments.

Imagine you are a woman whose medical history gives you a 3 percent chance of being diagnosed with breast cancer in the next five years. You are deciding whether to take tamoxifen over that five-year period to cut your risk to 1.5 percent. Your doctor presents you with a state-of-the-art decision aid. It turns out—as shown in a series of studies that my colleagues and I have conducted—that whether the decision aid improves your decision depends on how it is put together.

We started off by tackling the numeracy problem that Schwartz describes so eloquently. Over time, we determined that pictographs overcome many comprehension problems: the vast majority of people can interpret them quickly and accurately, grasping the gist of the information. Based on this research, we designed our decision aid to represent each piece of numerical information with a pictograph.

Second, we determined that side-by-side risk graphs often confuse people. Suppose a decision aid shows people the likelihood that a medication will cause heart attacks. One pictograph could illustrate the baseline risk of 10 percent by shading ten out of one hundred squares, and a second could illustrate the risk for those taking the pill by shading twelve out of one hundred. Many people have trouble working out from this pair that the drug adds a 2 percent chance of heart attacks. Incremental pictographs work better: the first shows, say, ten blue squares, and the next still shows ten blue squares but also two orange squares, with text explaining that the orange squares represent the added chance of heart attacks.
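
To make the contrast concrete, here is a minimal sketch of the two designs as text grids. Only the 10-in-100 and 12-in-100 figures come from the example above; the rendering and the `grid` helper are illustrative assumptions, not the decision aid the studies actually used.

```python
def grid(baseline, added=0, total=100, per_row=10):
    """Render a pictograph as text: '#' = baseline risk, '+' = added risk, '.' = no event."""
    cells = ["#"] * baseline + ["+"] * added + ["."] * (total - baseline - added)
    rows = ("".join(cells[i:i + per_row]) for i in range(0, total, per_row))
    return "\n".join(rows)

# Side-by-side design: two independent grids; the reader must notice that
# 12 shaded squares minus 10 shaded squares equals a 2 percent added risk.
print("Baseline risk (10 in 100):")
print(grid(10))
print("\nRisk on the drug (12 in 100):")
print(grid(12))

# Incremental design: one grid that marks the two added squares distinctly,
# so the 2 percent increment is visible without any subtraction.
print("\nIncremental display (10 baseline '#', 2 added '+'):")
print(grid(10, added=2))
```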

More importantly, we discovered that incremental pictographs also make people less susceptible to decision biases. For instance, people are more afraid of a side effect that occurs in twenty out of one thousand people than they are of the same side effect occurring in two out of one hundred people. Yes, gentle readers: same risk, very different feeling! The larger numbers sound more frightening; they conjure up an image of twenty people harmed instead of two. But the denominator bias, as it is called, is much less likely to occur when people are presented with incremental risk information.
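
The arithmetic behind that equivalence can be spelled out in a couple of lines. This is our illustration, not part of the studies, though the numbers are the essay's own:

```python
# Denominator bias: the two framings below describe the identical 2 percent
# risk, yet the larger numerator ("twenty people harmed") feels scarier.
small_frame = 2 / 100     # "two out of one hundred"
large_frame = 20 / 1000   # "twenty out of one thousand"
assert small_frame == large_frame == 0.02
print(f"Both framings equal {small_frame:.0%}")  # -> Both framings equal 2%
```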

Finally, we discovered through randomized survey experiments that women who received contextual risk information were impervious to the recency effect. When we informed women not only about their five-year risk of breast cancer, but also about their five-year risk of colon cancer, heart attack, and all-cause mortality, their response was no longer affected by the order in which they received information on risks and benefits.

We need skeptics like Schwartz to keep us from assuming that information is always the solution...
