I recently completed a new paper with my friend and classmate Christine Exley. The paper addresses an issue that has gotten much attention in the experimental economics literature: commitment devices.

To understand what I mean by a commitment device, let’s take a step back and talk about time preferences more generally. Suppose you are planning your activities for the coming day or week. It would be strange to limit your own options without some form of compensation. Yet that is exactly what we see people do in a variety of settings: They voluntarily open savings accounts with restricted access to their own cash. They choose wage contracts that are strictly dominated. They set early deadlines for classroom assignments.

One possible explanation has been very popular in the recent literature: people suffer from present bias, which essentially means they lack perfect self-control. That is, they have trouble following through on activities in the future that they know are good for them in the long run. A widely used model of this problem imagines that you are playing a game with your future selves: you all agree on what the best outcome is, but each version of yourself would prefer to push the hard work off to tomorrow’s version. It then makes sense for today’s version to force tomorrow’s version to take the action. Thus people put money in a lockbox account to force their future selves to save. They choose a goal-oriented wage contract or early deadlines to prevent tomorrow’s self from shirking. These are all examples of commitment devices.
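For readers who want the formalism, the usual way to write this down is quasi-hyperbolic ("beta-delta") discounting; the notation below is my own shorthand, not taken from our paper. The self deciding at date $t$ evaluates a consumption stream as

$$
U_t = u(c_t) + \beta \sum_{k \geq 1} \delta^{k} \, u(c_{t+k}), \qquad 0 < \beta < 1, \; 0 < \delta \leq 1.
$$

The extra factor $\beta$ shrinks everything beyond today, so the date-$t$ self and the date-$(t+1)$ self disagree about trade-offs between $t+1$ and later dates. A sophisticated agent who anticipates this disagreement will willingly pay for, or accept restrictions from, a commitment device; with $\beta = 1$ the model collapses to standard exponential discounting and that motive disappears.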

The plethora of lab, field, and theoretical work on this model leaves little doubt that present bias is a real phenomenon and that it drives commitment demand in many settings. In our paper, Christine and I document an interesting pattern in a field experiment: increasing the observability of the commitment choice leads to greater demand for commitment. If demand for commitment devices were driven by present bias alone, the observability of the chosen level of commitment should have no effect on the choice itself.

In our experiment, students signed up for workshops at a campus center through a website that we set up. After indicating which workshop they wanted to go to, our website offered them the following prospect: If they attended the workshop they had selected, they would receive a gift card for \$15.00. If they did not attend, they would receive an amount of their choosing between \$0 and \$15.00. If they wanted to commit themselves (at least partially) to going, they could choose to receive less than the full amount. And many students did, putting about \$5.00 on the line on average. This was our private treatment.

For a second treatment group, we then made a very simple change: The amount they chose to forgo would be made public to the people running the workshop. This small change increased the average commitment choice to almost \$9.00. Our hypothesis is that at least some of the students in our experiment chose to commit themselves in order to signal something about themselves: that they were interested in the center’s mission, or that they understood that failing to follow through on their plans was a bad thing.

The takeaway message from this field experiment is simple: We may be looking for present bias when other mechanisms are at play. Returning to the examples I cited earlier: People may choose lockbox accounts to show others that they value saving. They may choose restrictive deadlines or goals to signal to their boss or teacher that they are dedicated and conscientious workers. When running experiments such as these, we need to be cautious about inferring that these actions are necessarily driven by time preferences. If we are not careful, our estimates of the prevalence of present bias, or of time-preference parameters, may be biased.