Published and Forthcoming Manuscripts
Using Methods from Machine Learning to Evaluate Behavioral Models of Choice Under Risk and Ambiguity
with Alex Peysakhovich
[Journal of Economic Behavior & Organization, 2017]
How can behavioral scientists incorporate tools from machine learning (ML)? We propose that ML models can be used as upper bounds for the “explainable” variance in a given data set, and thus as benchmarks for the potential power of a theory. We demonstrate this method in the domain of uncertainty. We ask 600 individuals to make 6000 choices with randomized parameters and compare standard economic models to ML models. In the domain of risk, a version of expected utility that allows for non-linear probability weighting (as in cumulative prospect theory) and individual-level parameters performs as well out-of-sample as ML techniques. By contrast, in the domain of ambiguity, two of the most widely studied models (a linear version of maximin preferences and second-order expected utility) fail to compete with the ML methods. We open the “black boxes” of the ML methods and show that under risk our ML methods essentially “rediscover” expected utility with probability weighting. However, in the case of ambiguity, the form of ambiguity aversion implied by our ML models suggests that there are gains from theoretical work on a portable model of ambiguity aversion. Our results highlight ways in which behavioral scientists can incorporate ML techniques in their daily practice to gain genuinely new insights.
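The benchmarking idea above can be sketched in a few lines. This is a minimal illustration on synthetic data, not the paper's actual code, models, or experimental data: a flexible ML model's out-of-sample fit serves as an upper bound on the explainable variance available to any parametric theory, here proxied by a simple linear model against a random forest.

```python
# Minimal sketch (synthetic data) of using an ML model's out-of-sample fit
# as an upper bound on the variance a behavioral theory could explain.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical randomized lottery parameters (e.g., payoff and probability).
X = rng.uniform(size=(6000, 2))
# Simulated responses with a nonlinear interaction the linear "theory" misses.
y = X[:, 0] * X[:, 1] ** 0.7 + rng.normal(scale=0.05, size=6000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

theory = LinearRegression().fit(X_tr, y_tr)                  # stand-in for a parametric model
ml = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)   # flexible ML benchmark

r2_theory = r2_score(y_te, theory.predict(X_te))
r2_ml = r2_score(y_te, ml.predict(X_te))
# If r2_theory approaches r2_ml, the theory captures most explainable variance;
# a large gap (as here, by construction) signals room for better theory.
print(round(r2_theory, 3), round(r2_ml, 3))
```

The comparison is always out-of-sample, so the flexible model cannot win simply by overfitting.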
Observability Increases the Demand for Commitment Devices
with Christine Exley
[Management Science, 2017]
Previous research often interprets the choice to restrict one’s future opportunity set as evidence for sophisticated time-inconsistency. We propose an additional mechanism that may contribute to the demand for commitment technology: the desire to signal to others. We present a field experiment where participants can choose to give up money if they do not follow through with an action. When commitment choices are made public rather than kept private, we find significantly higher uptake rates.
When Fair Isn't Fair: Understanding Choice Reversals Involving Social Preferences
with James Andreoni, Deniz Aydin, Blake Barton, and B. Douglas Bernheim
[accepted at the Journal of Political Economy]
In settings with uncertainty, tension exists between ex ante and ex post notions of fairness (e.g., equal opportunity versus equal outcomes). In a laboratory experiment, the most common behavioral pattern is for subjects to select the ex ante fair alternative ex ante, and switch to the ex post fair alternative ex post. One potential explanation embraces consequentialism and construes the reversals as manifestations of time inconsistency. Another abandons consequentialism, thereby avoiding the implication that revisions imply inconsistency. We test between these explanations by examining the demand for commitment and for contingent planning. The hypothesis of time-consistent non-consequentialism receives strong support.
There is significant variation in the percentage of adults registered as organ donors across the United States. Some of this variation may be due to characteristics of the sign-up process, in particular the form that is used when state residents apply for or renew their driver's licenses. However, it is difficult to model and predict the success of the different forms with typical methods, due to the exceptionally large feature space and the limited data. To surmount this problem, I apply a methodology that uses data on subjective non-choice reactions to predict choices. I find that active (i.e., yes-no) framing of the designation question decreases designation rates by 2-3 percentage points relative to an opt-in framing. Additionally, I show that this methodology can predict behavior in an experimental setting involving social motives where we have good structural benchmarks. More generally, this methodology can be used to perform policy pseudo-experiments where field experiments would prove prohibitively expensive or difficult.
Non-Choice Evaluations Predict Behavioral Responses to Changes in Economic Conditions
with B. Douglas Bernheim, Daniel Bjorkegren, and Antonio Rangel
We develop a method for determining likely responses to a change in some economic condition (e.g., a policy) for settings in which either similar changes have not been observed, or it is challenging to identify observable exogenous causes of past changes. The method involves estimating statistical relationships across decision problems between choice frequencies and variables measuring non-choice reactions, and using those relationships along with additional non-choice data to predict choice frequencies under the envisioned conditions. In an experimental setting, we demonstrate that this method yields accurate measures of behavioral responses, while more standard methods are either inapplicable or highly inaccurate.
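The two-step logic of the method can be illustrated with a toy example. This is a hedged sketch on simulated data with hypothetical variable names, not the paper's estimator: first, estimate the cross-problem statistical relationship between choice frequencies and non-choice reactions; second, use non-choice data alone to forecast choice frequencies in an unobserved condition.

```python
# Minimal sketch (synthetic data) of the two-step idea: learn how non-choice
# reactions relate to choice frequencies across observed decision problems,
# then predict choices for a new condition from its non-choice data alone.
import numpy as np

rng = np.random.default_rng(1)
n_problems = 40
# Step 0: simulate non-choice reactions (e.g., subjective 1-7 ratings)
# and choice frequencies linked to them plus noise.
ratings = rng.uniform(1, 7, size=n_problems)
freq = np.clip(0.1 + 0.12 * ratings + rng.normal(scale=0.03, size=n_problems), 0, 1)

# Step 1: estimate the cross-problem relationship by least squares.
A = np.column_stack([np.ones(n_problems), ratings])
beta, *_ = np.linalg.lstsq(A, freq, rcond=None)

# Step 2: predict the choice frequency for an envisioned condition
# using only its (hypothetical) non-choice rating.
new_rating = 5.0
predicted_freq = beta[0] + beta[1] * new_rating
print(round(predicted_freq, 3))
```

The key requirement is that the relationship estimated in Step 1 is stable across decision problems, so it can be carried over to conditions where choices have never been observed.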
Work in Progress
The Model You Know: Generalizability and Predictive Power of Models of Choice Under Uncertainty
with B. Douglas Bernheim, Christine Exley, and Charles Sprenger [slides]
What Drives Conspicuous Consumption?
with James Andreoni, B. Douglas Bernheim, Christine Exley, and Paul Wong [slides]
Incentives for Long-run Volunteer Behavior
with Christine Exley