I am an Assistant Professor of Economics at the University of Pennsylvania. (I am on leave at MIT for Spring 2020.)
My research is in economic theory (in particular, learning and information) and in the application of machine learning methods to model building and evaluation.
The Ronald O. Perelman Center (Office 501)
133 South 36th Street
Philadelphia, PA 19104
Complementary Information and Learning Traps, joint with Xiaosheng Mu
Quarterly Journal of Economics, Vol. 135 (1), Pages 389-448, February 2020
Abstract: We develop a model of social learning from complementary information: Short-lived agents sequentially choose from a large set of flexibly correlated information sources for prediction of an unknown state, and information is passed down across periods. Will the community collectively acquire the best kinds of information? Long-run outcomes fall into one of two cases: (1) efficient information aggregation, where the community eventually learns as fast as possible; (2) "learning traps," where the community gets stuck observing suboptimal sources and information aggregation is inefficient. Our main results identify a simple property of the underlying informational complementarities that determines which occurs. In both regimes, we characterize which sources are observed in the long run and how often.
Predicting and Understanding Initial Play, joint with Drew Fudenberg
American Economic Review, Vol. 109 (12), Pages 4112-4141, December 2019
Abstract: We use machine learning to uncover regularities in the initial play of matrix games. We first train a prediction algorithm on data from past experiments. Examining the games where our algorithm predicts correctly but existing economic models do not leads us to add a parameter to the best-performing model that improves predictive accuracy. We then observe play in a collection of new “algorithmically-generated” games, and learn that we can obtain even better predictions with a hybrid model that uses a decision tree to decide, game by game, which of two economic models to use for prediction.
Journal of Economic Theory, Vol. 179, Pages 275-311, January 2019
Abstract: Suppose an analyst observes inconsistent choices from either a single decision-maker or a population of agents. Can the analyst determine whether this inconsistency arises from choice error (imperfect maximization of a single preference) or from preference heterogeneity (deliberate maximization of multiple preferences)? I model choice data as generated from imperfect maximization of a small number of preferences. The main results show that (a) simultaneously minimizing the number of inferred preferences and the number of unexplained observations can exactly recover the number of underlying preferences with high probability; (b) simultaneously minimizing the richness of the set of preferences and the number of unexplained observations can exactly recover the choice implications of the decision-maker's underlying preferences with high probability.
Data Linkages and Incentives, joint with Erik Madsen (February, 2020)
Abstract: Many firms, such as banks and insurers, condition their level of service on a consumer's perceived "quality," for instance, their creditworthiness. Increasingly, firms have access to consumer segmentations derived from auxiliary data on behavior, and can link outcomes across individuals in a segment for prediction. How does this practice affect consumer incentives to exert (socially valuable) effort, e.g., to repay loans? We show that the impact of a linkage on behavior depends crucially on whether the linkage reflects quality (via correlations in types) or a shared circumstance (via common shocks to observed outcomes).
Dynamically Aggregating Diverse Information, joint with Xiaosheng Mu and Vasilis Syrgkanis (April, 2020)
Abstract: An agent has access to multiple information sources, each of which provides information about a different attribute of an unknown state. Information is acquired continuously---the agent chooses both which sources to sample from and how to allocate attention across them---until an endogenously chosen time, at which point a decision is taken. We provide an exact characterization of the optimal information acquisition strategy for settings where the attributes are not too strongly correlated. We then apply this characterization to derive new results regarding: (1) endogenous information acquisition for binary choice, and (2) strategic information provision by competing news sources.
Measuring the Completeness of Theories, joint with Drew Fudenberg, Jon Kleinberg and Sendhil Mullainathan (January, 2020)
Abstract: To evaluate how well economic models predict behavior it is important to have a measure of how well any theory could be expected to perform. We provide a measure of the amount of predictable variation in the data that a theory captures, which we call its "completeness." We evaluate the completeness of leading theories in three applications---assigning certainty equivalents to lotteries, initial play in games, and human generation of random sequences---and show that this approach reveals new insights. We also illustrate how and why our completeness measure varies with the experiments considered, for example with the choice of lotteries used to evaluate risk preferences, and explain how our completeness measure can help guide the development of new theories.
Games of Incomplete Information Played by Statisticians (March, 2018)
Abstract: This paper proposes a foundation for heterogeneous beliefs in games, in which disagreement arises not because players observe different information, but because they learn from common information in different ways. Players may be misspecified, and may moreover be misspecified about how others learn. The key assumption is that players nevertheless have some common understanding of how to interpret the data; formally, players have common certainty in the predictions of a class of learning rules. The common prior assumption is nested as the special case in which this class is a singleton. The main results characterize which rationalizable actions and Nash equilibria can be predicted when agents observe a finite quantity of data, and how much data is needed to predict various solutions. The number of observations needed depends on the degree of strictness of the solution and the speed of common learning.
Abstract: A decision-maker (DM) faces an intertemporal decision problem, where his payoff depends on actions taken across time as well as on an unknown Gaussian state. The DM can learn about the state from different (correlated) information sources, and allocates a budget of samples across these sources each period. A simple information acquisition strategy for the DM is to neglect dynamic considerations and allocate samples myopically. How inefficient is this strategy relative to the optimal information acquisition strategy? We show that if the budget of samples is sufficiently large then there is no inefficiency: myopic information acquisition is exactly optimal.