Information Elicitation

How can you design a contract to incentivize a self-interested agent to give you honest information? What does our choice of error measure say about what we are trying to predict? These questions drive the area of information elicitation, which has applications to economics, machine learning, statistics, finance, and engineering. Most of the topics below, such as peer prediction, prediction markets, property elicitation, and mechanism design, fall into this broad area.

Property Elicitation

Ever since Glenn Brier sounded the warning call in 1950 that popular error measures were encouraging inaccurate meteorological forecasts, a growing body of work in statistics, economics, and now computer science has sought to design error measures that incentivize and assess accurate forecasts. This diverse body of work goes by the name of property elicitation: the design of error measures (loss functions) that incentivize accurate statistical reports from agents or algorithms. It has applications to algorithmic economics, machine learning, finance, and engineering. My work addresses fundamental questions in this area: which statistics can be elicited (expressed as the minimizer of an expected loss function), and how many parameters are required. [Image credit: Pooran Memari]
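As a concrete illustration of "a statistic expressed as the minimizer of an expected loss" (a toy sketch of my own, not taken from any particular paper): squared error elicits the mean, while absolute error elicits the median, so the choice of loss already determines what is being predicted. A quick numerical check over a grid of candidate reports:

```python
import random
import statistics

# Toy check: the report minimizing expected squared loss is the mean,
# while the report minimizing expected absolute loss is the median.
random.seed(0)
samples = [random.expovariate(0.5) for _ in range(500)]  # skewed data, mean ~2

def empirical_loss(report, loss):
    """Average loss of a point forecast over the observed samples."""
    return sum(loss(report, y) for y in samples) / len(samples)

grid = [i * 0.02 for i in range(501)]  # candidate reports 0.00 .. 10.00
best_sq = min(grid, key=lambda r: empirical_loss(r, lambda r, y: (r - y) ** 2))
best_abs = min(grid, key=lambda r: empirical_loss(r, lambda r, y: abs(r - y)))

print(best_sq, statistics.mean(samples))     # nearly equal
print(best_abs, statistics.median(samples))  # nearly equal
```

Both minimizers land on the grid point nearest the corresponding statistic, up to the grid resolution.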

Prediction Markets

Markets designed for the sole purpose of gathering information from the crowd are called prediction markets. These fascinating markets have a long history, dating back to markets in Rome for predicting the next pope, but modern incarnations take a very different form. I study automated market makers: centralized agents, subsidized by the market, which offer to buy or sell any security at a price that updates automatically given the history of trade. Here the central question is how to guarantee a bounded loss for the market maker while preserving the quality of information and adapting to market conditions and external events.
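One standard example of such a cost-function-based market maker is Hanson's logarithmic market scoring rule (LMSR); the sketch below is an illustrative implementation of that textbook construction, not code from the papers above. A trader pays the change in a convex cost function, the instantaneous prices are its gradient, and the market maker's worst-case loss is bounded by b·log(n) for n outcomes:

```python
import math

class LMSRMarketMaker:
    """Logarithmic market scoring rule: cost C(q) = b * log(sum_i exp(q_i / b)).
    Worst-case loss for the market maker is bounded by b * log(n)."""

    def __init__(self, n_outcomes, b=100.0):
        self.b = b
        self.q = [0.0] * n_outcomes  # outstanding shares of each outcome

    def cost(self, q):
        # Stabilized log-sum-exp.
        m = max(x / self.b for x in q)
        return self.b * (m + math.log(sum(math.exp(x / self.b - m) for x in q)))

    def price(self, i):
        # Instantaneous price = softmax of the share vector; prices sum to 1.
        m = max(x / self.b for x in self.q)
        exps = [math.exp(x / self.b - m) for x in self.q]
        return exps[i] / sum(exps)

    def buy(self, i, shares):
        # Trader pays the cost difference C(q') - C(q), which updates the price.
        new_q = list(self.q)
        new_q[i] += shares
        payment = self.cost(new_q) - self.cost(self.q)
        self.q = new_q
        return payment
```

For example, with two outcomes and b = 10 the initial price of each security is 0.5, buying shares of an outcome pushes its price up, and the prices always sum to one.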

Peer Prediction

How can you incentivize someone to tell the truth if you have no way of verifying their information, and never will? This is the central question of peer prediction, whose goal is to design mechanisms that bootstrap these incentives from several agents' responses to the same query. The field derives its name from the intuition of scoring an agent's report based on how well it "predicts" another's. Peer prediction mechanisms are notoriously brittle and unstable; recent work has sought to understand why and to design mechanisms that are more robust. Image: one proposed robust mechanism from [31, 27].
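To make the "predicts another's report" intuition concrete, here is a toy sketch (my own illustration with made-up numbers, not the mechanism from [31, 27]): pay each agent a proper score for the distribution over a peer's signal implied by the agent's report. When agents share a common model and peers report truthfully, truth-telling maximizes expected payment:

```python
import math

# Toy common-prior model with two signals. posterior[s][t] is the
# probability an agent assigns to a peer seeing signal t, given that
# the agent herself saw s. (Illustrative numbers only.)
posterior = {
    "H": {"H": 0.8, "L": 0.2},
    "L": {"H": 0.3, "L": 0.7},
}

def payment(report, peer_report):
    """Log proper scoring rule applied to the posterior implied by the
    agent's report, scored against the peer's actual report."""
    return math.log(posterior[report][peer_report])

def expected_payment(true_signal, report):
    """Expected payment when the agent saw true_signal but reports `report`,
    assuming the peer reports truthfully."""
    return sum(p * payment(report, t) for t, p in posterior[true_signal].items())

# By properness of the log score, truth-telling beats misreporting.
for s in ("H", "L"):
    lie = "L" if s == "H" else "H"
    print(s, expected_payment(s, s) > expected_payment(s, lie))  # True, True
```

The brittleness mentioned above shows up here too: the payments hinge on the mechanism knowing the agents' common posterior, an assumption robust mechanisms try to relax.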

Dynamical Systems

In many real-world mathematical modeling applications, one eventually arrives at a complex dynamical system: a set of equations describing how the state of the system changes with time. How can we harness the ever-increasing computational power at our fingertips to make rigorous statements about these systems? For instance, one may wish to know the number of low-period orbits, or a lower bound on the topological entropy. In particular, we would like a completely automated approach to producing these rigorous statements. My work in dynamics focuses on building such an approach using a combination of simple interval arithmetic, to bound computational errors, and very powerful tools from computational topology, namely the discrete Conley index.
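As a minimal illustration of the interval-arithmetic ingredient (a sketch only: a rigorous implementation must also direct floating-point rounding outward, and the Conley-index machinery sits well beyond this snippet), one can compute a guaranteed enclosure of the image of a set of states under one step of a map, here the logistic map f(x) = r·x·(1−x):

```python
class Interval:
    """Minimal interval arithmetic sketch. Every operation returns an
    interval guaranteed to contain all possible true results (modulo
    floating-point rounding, which a rigorous version must control)."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The product of two intervals is bounded by the extreme
        # products of their endpoints.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __contains__(self, x):
        return self.lo <= x <= self.hi

def logistic_step(x, r=4.0):
    """Rigorous enclosure of f(x) = r * x * (1 - x) over the interval x."""
    one_minus_x = Interval(1.0 - x.hi, 1.0 - x.lo)
    return Interval(r, r) * x * one_minus_x
```

The enclosure may overestimate (the classic dependency problem of interval arithmetic), but it is guaranteed to contain the true image, which is exactly what a rigorous, automated argument needs.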