I am an Assistant Professor of Economics at the University of Pennsylvania.
My research is in economic theory (in particular, learning and information), and the application of machine learning methods to model building and evaluation.
The Ronald O. Perelman Center (Office 501)
133 South 36th Street
Philadelphia, PA 19104
Complementary Information and Learning Traps, joint with Xiaosheng Mu
Quarterly Journal of Economics, Vol. 135 (1), Pages 389-448, February 2020
Abstract: We develop a model of social learning from complementary information: Short-lived agents sequentially choose from a large set of flexibly correlated information sources for prediction of an unknown state, and information is passed down across periods. Will the community collectively acquire the best kinds of information? Long-run outcomes fall into one of two cases: (1) efficient information aggregation, where the community eventually learns as fast as possible; (2) "learning traps," where the community gets stuck observing suboptimal sources and information aggregation is inefficient. Our main results identify a simple property of the underlying informational complementarities that determines which occurs. In both regimes, we characterize which sources are observed in the long run and how often.
Predicting and Understanding Initial Play, joint with Drew Fudenberg
American Economic Review, Vol. 109 (12), Pages 4112-4141, December 2019
Abstract: We use machine learning to uncover regularities in the initial play of matrix games. We first train a prediction algorithm on data from past experiments. Examining the games where our algorithm predicts correctly, but existing economic models don’t, leads us to add a parameter to the best performing model that improves predictive accuracy. We then observe play in a collection of new “algorithmically-generated” games, and learn that we can obtain even better predictions with a hybrid model that uses a decision tree to decide game-by-game which of two economic models to use for prediction.
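As a rough illustration of the hybrid-model idea (a sketch only, not the paper's implementation: the two candidate models, the game features, and the synthetic data below are all invented stand-ins), a decision tree can be trained to decide, game by game, which of two models' predictions to use:

```python
# Illustrative sketch: a decision tree selects, game by game, which of
# two candidate models to trust. Both "economic models" here are
# hypothetical stand-ins, and the game data is synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def model_a(features):
    # Stand-in model A: always predicts action 0.
    return 0

def model_b(features):
    # Stand-in model B: predicts based on one game feature.
    return int(features[1] > 0)

# Synthetic training data: game features and the action actually played.
X = rng.normal(size=(200, 3))
y = (X[:, 1] > 0).astype(int)  # synthetic "observed initial play"

# Label each game by which model predicted it correctly.
labels = np.array([0 if model_a(x) == yi else 1 for x, yi in zip(X, y)])

# A shallow decision tree learns which model to apply to which game.
selector = DecisionTreeClassifier(max_depth=2).fit(X, labels)

def hybrid_predict(features):
    chosen = selector.predict(features.reshape(1, -1))[0]
    return model_b(features) if chosen == 1 else model_a(features)
```

On this synthetic data the selector learns to route games to whichever stand-in model fits them, which is the structural point of the hybrid approach: the tree partitions the space of games, not the space of actions.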
Journal of Economic Theory, Vol. 179, Pages 275-311, January 2019
Abstract: Suppose an analyst observes inconsistent choices from either a single decision-maker or a population of agents. Can the analyst determine whether this inconsistency arises from choice error (imperfect maximization of a single preference) or from preference heterogeneity (deliberate maximization of multiple preferences)? I model choice data as generated from imperfect maximization of a small number of preferences. The main results show that (a) simultaneously minimizing the number of inferred preferences and the number of unexplained observations can exactly recover the number of underlying preferences with high probability; (b) simultaneously minimizing the richness of the set of preferences and the number of unexplained observations can exactly recover the choice implications of the decision-maker's underlying preferences with high probability.
Data Linkages and Incentives, joint with Erik Madsen (Jan, 2020)
Abstract: Many organizations, such as banks and insurers, determine what services to offer based on a perceived quality of the recipient, e.g. their creditworthiness. Increasingly, organizations have access to new data about consumers, such as categorizations into demographic and lifestyle segments. When organizations learn about a consumer's quality from the behavior of other consumers in the same segment--creating data linkages--what are the consequences for each consumer's incentives to exert effort, e.g. to maintain a good credit rating? We study a multiple-agent career concerns model in which agents choose whether to interact with a principal and how much costly effort to exert. Data linkages create informational externalities across consumers, shaping participation rates and effort provision in equilibrium. We show that whether these externalities are welfare-improving depends crucially on whether linkages are about quality (revealing correlations in underlying types) or about a shared circumstance (helping the principal to de-bias shared shocks to observed outcomes).
Dynamically Aggregating Diverse Information, joint with Xiaosheng Mu and Vasilis Syrgkanis (July, 2019)
Abstract: An agent has access to multiple data sources, each of which provides information about a different attribute of an unknown state. Information is acquired continuously--where the agent chooses both which sources to sample from, and also how to allocate resources across them--until an endogenously chosen time, at which point a decision is taken. We show that the optimal information acquisition strategy proceeds in stages, where resource allocation is constant over a fixed set of providers during each stage, and at each stage a new provider is added to the set. We additionally apply this characterization to derive results regarding: (1) endogenous information acquisition in a binary choice problem, and (2) equilibrium information provision by competing news sources.
Measuring the Completeness of Theories, joint with Drew Fudenberg, Jon Kleinberg and Sendhil Mullainathan (Jan, 2020)
Abstract: To evaluate how well economic models predict behavior it is important to have a measure of how well any theory could be expected to perform. We provide a measure of the amount of predictable variation in the data that a theory captures, which we call its "completeness." We evaluate the completeness of leading theories in three applications---assigning certainty equivalents to lotteries, initial play in games, and human generation of random sequences---and show that this approach reveals new insights. We also illustrate how and why our completeness measure varies with the experiments considered, for example with the choice of lotteries used to evaluate risk preferences, and explain how our completeness measure can help guide the development of new theories.
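The completeness idea in the abstract above can be sketched numerically (a sketch under my own assumptions, not the paper's code: completeness is taken here as the fraction of the gap between a naive baseline's error and the best achievable error that a theory closes, and the error values are made up for illustration):

```python
# Sketch of a "completeness" calculation: how much of the achievable
# predictable variation a theory captures, normalized between a naive
# baseline and the best achievable predictor.

def completeness(err_naive, err_theory, err_best):
    """Fraction of the gap between the naive baseline's error and the
    best achievable error that the theory closes. Returns 1.0 when the
    theory matches the best achievable performance, 0.0 when it does
    no better than the naive baseline."""
    return (err_naive - err_theory) / (err_naive - err_best)

# Hypothetical prediction errors on some dataset:
score = completeness(err_naive=0.50, err_theory=0.20, err_best=0.10)
print(score)  # 0.75
```

The normalization matters because the raw error of a theory is hard to interpret on its own: it conflates how good the theory is with how much predictable structure the data contains in the first place.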
Games of Incomplete Information Played by Statisticians (March, 2018)
Abstract: This paper proposes a foundation for heterogeneous beliefs in games, in which disagreement arises not because players observe different information, but because they learn from common information in different ways. Players may be misspecified, and may moreover be misspecified about how others learn. The key assumption is that players nevertheless have some common understanding of how to interpret the data; formally, players have common certainty in the predictions of a class of learning rules. The common prior assumption is nested as the special case in which this class is a singleton. The main results characterize which rationalizable actions and Nash equilibria can be predicted when agents observe a finite quantity of data, and how much data is needed to predict various solutions. The number of observations needed depends on the strictness of the solution and the speed of common learning.
Abstract: A decision-maker (DM) faces an intertemporal decision problem, where his payoff depends on actions taken across time as well as on an unknown Gaussian state. The DM can learn about the state from different (correlated) information sources, and allocates a budget of samples across these sources each period. A simple information acquisition strategy for the DM is to neglect dynamic considerations and allocate samples myopically. How inefficient is this strategy relative to the optimal information acquisition strategy? We show that if the budget of samples is sufficiently large then there is no inefficiency: myopic information acquisition is exactly optimal.
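The myopic rule in the abstract above can be sketched for a Gaussian setting (a minimal sketch under invented assumptions: a two-dimensional state, two linear signals, and a payoff that depends only on the first coordinate; this is not the paper's model code):

```python
# Minimal sketch of myopic information acquisition with correlated
# Gaussian sources: each period, allocate the next sample to whichever
# source most reduces the posterior variance of the payoff-relevant
# coordinate (here, theta_0). The prior and signal structure are
# invented for illustration.
import numpy as np

def posterior_cov(Sigma, a, noise_var):
    """Covariance of theta after observing one sample of a @ theta + noise,
    via the standard rank-one Gaussian update."""
    Sa = Sigma @ a
    return Sigma - np.outer(Sa, Sa) / (a @ Sa + noise_var)

def myopic_step(Sigma, sources, noise_var):
    """Pick the source minimizing next-period posterior variance of theta_0."""
    variances = [posterior_cov(Sigma, a, noise_var)[0, 0] for a in sources]
    return int(np.argmin(variances))

# Two-dimensional state; source 0 observes theta_0 directly,
# source 1 observes the sum theta_0 + theta_1 (so sources are correlated).
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
sources = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]
for _ in range(5):
    i = myopic_step(Sigma, sources, noise_var=1.0)
    Sigma = posterior_cov(Sigma, sources[i], noise_var=1.0)
```

Note that with correlated sources the myopic choice is not obvious a priori: in this example the summed signal can beat the direct signal when the prior correlation is positive, because it carries information about both coordinates at once.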