I am an Assistant Professor of Economics at the University of Pennsylvania.
My research is in economic theory (in particular, learning and information) and in the application of machine learning methods to model building and evaluation.
The Ronald O. Perelman Center (Office 501)
133 South 36th Street
Philadelphia, PA 19104
Complementary Information and Learning Traps, joint with Xiaosheng Mu
Quarterly Journal of Economics, Vol. 135 (1), Pages 389-448, February 2020
Abstract: We develop a model of social learning from complementary information: Short-lived agents sequentially choose from a large set of flexibly correlated information sources for prediction of an unknown state, and information is passed down across periods. Will the community collectively acquire the best kinds of information? Long-run outcomes fall into one of two cases: (1) efficient information aggregation, where the community eventually learns as fast as possible; (2) "learning traps," where the community gets stuck observing suboptimal sources and information aggregation is inefficient. Our main results identify a simple property of the underlying informational complementarities that determines which occurs. In both regimes, we characterize which sources are observed in the long run and how often.
Predicting and Understanding Initial Play, joint with Drew Fudenberg
American Economic Review, Vol. 109 (12), Pages 4112-4141, December 2019
Abstract: We use machine learning to uncover regularities in the initial play of matrix games. We first train a prediction algorithm on data from past experiments. Examining the games where our algorithm predicts correctly, but existing economic models don't, leads us to add a parameter to the best-performing model that improves predictive accuracy. We then observe play in a collection of new "algorithmically generated" games, and learn that we can obtain even better predictions with a hybrid model that uses a decision tree to decide game-by-game which of two economic models to use for prediction.
Journal of Economic Theory, Vol. 179, Pages 275-311, January 2019
Abstract: Suppose an analyst observes inconsistent choices from either a single decision-maker or a population of agents. Can the analyst determine whether this inconsistency arises from choice error (imperfect maximization of a single preference) or from preference heterogeneity (deliberate maximization of multiple preferences)? I model choice data as generated from imperfect maximization of a small number of preferences. The main results show that (a) simultaneously minimizing the number of inferred preferences and the number of unexplained observations can exactly recover the number of underlying preferences with high probability; (b) simultaneously minimizing the richness of the set of preferences and the number of unexplained observations can exactly recover the choice implications of the decision-maker's underlying preferences with high probability.
Data Sharing and Incentives, joint with Erik Madsen (November 2019)
Abstract: Many organizations, such as banks and insurers, determine what services to offer based on a perceived quality of the recipient, e.g. their creditworthiness. With new access to detailed data on individual consumers, organizations are increasingly estimating quality not only from a given consumer's interactions with the organization, but also from interactions with comparable individuals. What are the consequences for consumers' incentives to exert effort in their interactions with the firm, e.g. to maintain a good credit rating? To answer this question, we study a multiple-agent career concerns model in which agents choose whether to interact with a principal, who provides a service and aggregates data across all participating agents. Individuals' interactions create an informational externality on others, shaping participation rates and effort provision in equilibrium. We show that whether data sharing is welfare-improving depends crucially on how the actions of individuals affect inferences about related consumers, specifically on whether information across consumers is "complementary" or "substitutable."
Dynamically Aggregating Diverse Information, joint with Xiaosheng Mu and Vasilis Syrgkanis (July 2019)
Abstract: An agent has access to multiple data sources, each of which provides information about a different attribute of an unknown state. Information is acquired continuously, with the agent choosing both which sources to sample from and how to allocate resources across them, until an endogenously chosen time, at which point a decision is taken. We show that the optimal information acquisition strategy proceeds in stages: resource allocation is constant over a fixed set of providers during each stage, and at each stage a new provider is added to the set. We additionally apply this characterization to derive results regarding: (1) endogenous information acquisition in a binary choice problem, and (2) equilibrium information provision by competing news sources.
Measuring the Completeness of Theories, joint with Drew Fudenberg, Jon Kleinberg and Sendhil Mullainathan (September 2019)
Abstract: We use machine learning to provide a tractable measure of the amount of predictable variation in the data that a theory captures, which we call its "completeness." We apply this measure to three problems: assigning certain equivalents to lotteries, initial play in games, and human generation of random sequences. We discover considerable variation in the completeness of existing models, which sheds light on whether to focus on developing better models with the same features or instead to look for new features that will improve predictions. We also illustrate how and why completeness varies with the experiments considered, which highlights the role played by the choice of which experiments to run.
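The idea of completeness as the share of achievable predictive improvement that a model captures can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, the use of prediction accuracy as the performance metric, and the numbers below are all illustrative assumptions.

```python
def completeness(naive: float, model: float, best: float) -> float:
    """Fraction of the achievable improvement over a naive baseline
    that a model captures: 0 = no better than the baseline,
    1 = matches the best achievable prediction performance.

    naive -- performance of a naive baseline predictor
    model -- performance of the model being evaluated
    best  -- best achievable performance (the predictable variation)
    """
    if best == naive:
        raise ValueError("baseline and best-achievable performance coincide")
    return (model - naive) / (best - naive)


# Illustrative accuracies only (not figures from the paper):
# a model at 45% accuracy, between a 30% baseline and a 50% ceiling,
# captures three quarters of the predictable variation.
print(completeness(naive=0.30, model=0.45, best=0.50))  # 0.75
```

Note that the measure depends on the estimated ceiling `best`, which is why completeness can vary across experiments even for a fixed model.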
Games of Incomplete Information Played by Statisticians (March 2018)
Abstract: This paper proposes a foundation for heterogeneous beliefs in games, in which disagreement arises not because players observe different information, but because they learn from common information in different ways. Players may be misspecified, and may moreover be misspecified about how others learn. The key assumption is that players nevertheless have some common understanding of how to interpret the data; formally, players have common certainty in the predictions of a class of learning rules. The common prior assumption is nested as the special case in which this class is a singleton. The main results characterize which rationalizable actions and Nash equilibria can be predicted when agents observe a finite quantity of data, and how much data is needed to predict various solutions. The number of observations needed depends on the degree of strictness of the solution and the speed of common learning.
Abstract: A decision-maker (DM) faces an intertemporal decision problem, where his payoff depends on actions taken across time as well as on an unknown Gaussian state. The DM can learn about the state from different (correlated) information sources, and allocates a budget of samples across these sources each period. A simple information acquisition strategy for the DM is to neglect dynamic considerations and allocate samples myopically. How inefficient is this strategy relative to the optimal information acquisition strategy? We show that if the budget of samples is sufficiently large then there is no inefficiency: myopic information acquisition is exactly optimal.