Wednesday, January 28, 2015

Sunstein, Choosing Not to Choose

Cass R. Sunstein, a law professor at Harvard and co-author of the bestselling Nudge, continues his exploration of choice in his latest book, Choosing Not to Choose: Understanding the Value of Choice (Oxford University Press, 2015).

When it comes to choosing, we essentially have three options: impersonal default rules, active choosing, and personalized default rules. Sunstein lays down some guiding principles about which option is preferable in what circumstances. Among them, “impersonal default rules should generally be preferred to active choosing when (1) the context is confusing, technical, and unfamiliar, (2) people would prefer not to choose, (3) learning is not important, and (4) the population is not heterogeneous along any relevant dimension. … [A]ctive choosing should generally be preferred to impersonal default rules when (1) choice architects are biased or lack important information, (2) the context is familiar or nontechnical, (3) people would actually prefer to choose …, (4) learning matters, and (5) there is relevant heterogeneity. To favor active choosing, it is not necessary that all five conditions be met. … [P]ersonalized default rules should generally be preferred to impersonal ones in the face of relevant heterogeneity.” (pp. 18-19)
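To make the framework concrete, here is a minimal sketch of these principles as a decision helper. This is my own illustration, not anything from the book: the flag names, the three-condition threshold, and the example are all assumptions layered on top of the quoted passage.

```python
# Illustrative sketch of Sunstein's guiding principles (pp. 18-19).
# The flags, threshold, and logic are my own encoding, not the book's.
from dataclasses import dataclass


@dataclass
class ChoiceContext:
    confusing_or_technical: bool   # context is confusing, technical, unfamiliar
    people_prefer_to_choose: bool  # do people actually want to choose?
    learning_matters: bool         # is learning from choosing important?
    relevant_heterogeneity: bool   # does the population vary on relevant dimensions?
    architects_trustworthy: bool   # are choice architects unbiased and well informed?


def recommend(ctx: ChoiceContext) -> str:
    # Count the conditions favoring active choosing. Sunstein notes that
    # not all five need to be met; the threshold of three is my own guess.
    favors_active = sum([
        not ctx.architects_trustworthy,
        not ctx.confusing_or_technical,
        ctx.people_prefer_to_choose,
        ctx.learning_matters,
        ctx.relevant_heterogeneity,
    ])
    if favors_active >= 3:
        return "active choosing"
    # Among default rules, relevant heterogeneity favors personalization.
    if ctx.relevant_heterogeneity:
        return "personalized default rule"
    return "impersonal default rule"


# Example: retirement-plan enrollment is technical, most people would
# rather not choose, and savings needs differ from person to person.
print(recommend(ChoiceContext(
    confusing_or_technical=True,
    people_prefer_to_choose=False,
    learning_matters=False,
    relevant_heterogeneity=True,
    architects_trustworthy=True,
)))  # -> personalized default rule
```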

As information about people’s actual choices accumulates, personalized default rules will become increasingly available, something Sunstein considers, on balance, a plus. I’m going to skip straight to his discussion of this option, since it is an obvious follow-up to my piece on The Black Box Society.

Sunstein admits that the idea of personalized default rules raises serious concerns. “Some of these involve narrowing our horizons; others involve the exercise of autonomy; others involve identification and authenticity; still others involve personal privacy.” Even so, in many cases such default rules, he maintains, “can make life not only simpler and more fun but also longer and healthier.” (p. 159)

In an extreme case, we could have a political system with personalized voting defaults, so that people are automatically defaulted into voting for the candidate or party suggested by their previous votes (subject of course to opt-out). But, Sunstein notes, there is a devastating problem with such a voting system: “the internal morality of voting. The very act of voting is supposed to represent an active choice, in which voters are engaged, thinking, participating, and selecting among particular candidates. Of course this is an ideal, and far from a reality for everyone. … But the aspiration is important.” (p. 164)

What about shopping? So far retailers don’t offer default rules, only recommendations, which I at least find annoying. If you bought a book by a certain author, they suggest, you’ll probably like books by another author who is somehow “similar.” But what if sellers knew, “with perfect or near-perfect certainty,” what people wanted to buy even before they themselves did? (This is creepy big data in full swing.)

Sunstein conducted some surveys to ascertain whether people would approve or disapprove of a scheme where a seller sends you books that it knows you will purchase, and bills you (though you can send the books back if you don’t want them). In a nationwide survey respondents didn’t buy into automatic enrollment—71% disapproved; even if you could voluntarily sign up for such a program, 59% said they would decline to do so. Why wouldn’t everybody opt in? Some may distrust the incentives of the seller, others might view searching for a book as a benefit instead of a cost, and of course people’s preferences change. How many James Patterson books do you really want to buy? “Even if the algorithms are extraordinarily good, they must extrapolate from the past, and the extrapolation might be hazardous if people do not like in the future what they liked in the past, or if they like in the future what they did not like in the past.” (p. 182)

Perhaps, Sunstein suggests, we should distinguish among types of purchases. He offers a two-by-two matrix: on one axis, purchases that are easy or automatic versus difficult and time-consuming; on the other, purchases that are fun or pleasurable versus not. The upper left quadrant (easy and fun) holds impulse purchases, where there is little reason for predictive shopping. The upper right quadrant (difficult but fun: books, vacations, cars) is again not the obvious place for predictive shopping, because for a lot of people such shopping is fun. The lower left quadrant (easy but not fun) includes household staples; here the costs of choice are low, so there is no urgent need for automaticity. The lower right quadrant (difficult and not fun) is where there would be real value in automaticity; it includes retirement plans and health insurance. Since for most people the choice of a retirement plan or health insurance is difficult, time-consuming, and not fun, “if predictive shopping could be made accurate and easy, there would be a good argument for automatic purchases.” (p. 186)
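As a toy illustration of the matrix (again mine, not Sunstein’s), each purchase reduces to two booleans, and only the difficult-and-not-fun quadrant makes a strong case for automatic purchasing. The example items are the ones discussed above; the function itself is an assumption for illustration.

```python
# Toy encoding of the two-by-two matrix: effort on one axis, pleasure
# on the other. My own illustration of the argument, not the book's.

def predictive_shopping_case(difficult: bool, fun: bool) -> str:
    if difficult and not fun:
        # Retirement plans, health insurance: choosing is costly and
        # joyless, so accurate predictive shopping would add real value.
        return "strong case for automatic purchases"
    if not difficult and not fun:
        # Household staples: the costs of choice are already low.
        return "no urgent need for automaticity"
    # Impulse buys (easy and fun) and books/vacations/cars (hard but fun):
    # for many people the choosing is the fun part.
    return "little reason for predictive shopping"


for item, difficult, fun in [
    ("impulse purchase", False, True),
    ("vacation", True, True),
    ("laundry detergent", False, False),
    ("health insurance", True, False),
]:
    print(f"{item}: {predictive_shopping_case(difficult, fun)}")
```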

But beware the slippery slope.
