EconCS @ UC Berkeley
May 7 | Milosh Drezgich on Matrix Multiplicative Updates and Some Nonzero Sum Games.
Apr 30 | Renato Paes Leme (Microsoft) on Overcoming impossibilities in budgeted mechanism design. [ paper ]
In settings where players have limited access to liquidity, represented in the form of budget constraints, efficiency maximization has proven to be a challenging goal. In particular, the social welfare cannot be approximated by a factor better than the number of players. Therefore, the literature has mainly resorted to Pareto-efficiency as a way to achieve efficiency in such settings. While successful in some important scenarios, in many settings it is known either that exactly one incentive-compatible auction always outputs a Pareto-efficient solution, or that no truthful mechanism can always guarantee a Pareto-efficient outcome. Traditionally, impossibility results can be avoided by considering approximations. However, Pareto-efficiency is a binary property (it is either satisfied or not), which does not allow for approximations.
In this paper we propose a new notion of efficiency, called liquid welfare. This is the maximum amount of revenue an omniscient seller would be able to extract from a given instance. We explain the intuition behind this objective function and show that it can be 2-approximated by two different auctions. Moreover, we show that no truthful algorithm can guarantee an approximation factor better than 4/3 with respect to the liquid welfare, and we provide a truthful auction that attains this bound in a special case.
Importantly, the liquid welfare benchmark also overcomes impossibilities for some settings. While it is impossible to design Pareto-efficient auctions for multi-unit auctions where players have decreasing marginal values, we give a deterministic $O(\log n)$-approximation for the liquid welfare in this setting.
This is joint work with Shahar Dobzinski.
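As a minimal sketch of the objective (my own made-up multi-unit instance, assuming additive per-unit values, not the paper's code), liquid welfare caps each winner's contribution at her budget:

```python
# Hypothetical multi-unit instance (my own toy numbers, not from the paper):
# each winner's contribution to liquid welfare is capped at her budget.
def liquid_welfare(allocation, value_per_unit, budget):
    return sum(min(allocation[i] * value_per_unit[i], budget[i])
               for i in allocation)

alloc = {"a": 2, "b": 5}            # units won by each player
vals  = {"a": 1.0, "b": 4.0}        # additive per-unit values
budg  = {"a": 10.0, "b": 3.0}       # player b's value 20.0 is capped at 3.0
print(liquid_welfare(alloc, vals, budg))  # 2.0 + 3.0 = 5.0
```

The cap is what makes the benchmark approximable: unlike raw welfare, no single budget-constrained player can dominate the objective.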
In popular culture, the friendship paradox is known as the somewhat discouraging statement that a person has fewer friends than her friends do. Surprisingly, this statement holds some mathematical truth. First discovered by Feld in 1991, the friendship paradox states that in expectation a node's degree is bounded from above by the average degree of its neighbors. This phenomenon was also recently observed in various experiments on large-scale online social networks, showing that a striking majority of individuals have a smaller number of friends than the average number of friends their friends have. In this talk we'll discuss the friendship paradox in networks with power law degree distributions.
In particular, we will present some analytical results that quantify the friendship paradox effect as a function of the network structure, and verify these results using online social network data.
Based on joint work with Silvio Lattanzi
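The degree comparison at the heart of the paradox can be seen on a toy star graph (my own example, not data from the talk):

```python
from statistics import mean

# Toy star graph (my own example): node 0 is a hub linked to five leaves.
edges = [(0, i) for i in range(1, 6)]
neighbors = {}
for u, v in edges:
    neighbors.setdefault(u, []).append(v)
    neighbors.setdefault(v, []).append(u)

degree = {v: len(ns) for v, ns in neighbors.items()}
# Nodes whose degree is below the mean degree of their neighbors.
below = [v for v in neighbors
         if degree[v] < mean(degree[w] for w in neighbors[v])]
print(len(below), "of", len(neighbors),
      "nodes have fewer friends than their friends do")
```

Every leaf has one friend (the hub) with five friends, so only the hub escapes the paradox; heavy-tailed degree distributions make such hubs common.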
Apr 16 | Eric Friedman (ICSI, UC Berkeley) on An Introduction to Surreal Numbers and Combinatorial Games.
Surreal numbers were invented by Conway in his well-known book "On Numbers and Games" and popularized in Knuth's "Surreal Numbers: How Two Ex-Students Turned on to Pure Mathematics and Found Total Happiness". In addition to being beautiful mathematics, they provide a powerful tool for understanding so-called combinatorial games.
In this talk I will provide an introduction to the surreal numbers and their connection to combinatorial games with an eye towards applications and approximation algorithms.
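As a taste of the combinatorial-games side, here is a minimal Sprague-Grundy ("mex") computation for a toy subtraction game (my own example, not from the talk):

```python
from functools import lru_cache

# Toy subtraction game (my own example, not from the talk): a move removes
# 1, 2, or 3 tokens; positions with Grundy value 0 are previous-player wins.
@lru_cache(maxsize=None)
def grundy(n):
    reachable = {grundy(n - m) for m in (1, 2, 3) if m <= n}
    g = 0
    while g in reachable:   # mex: the minimum excluded value
        g += 1
    return g

print([grundy(n) for n in range(8)])  # [0, 1, 2, 3, 0, 1, 2, 3]
```

The period-4 pattern means a pile of n tokens is a loss for the player to move exactly when n is a multiple of 4.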
The Gibbard-Satterthwaite theorem says that any non-dictatorial way of electing a winner among three or more candidates is subject to manipulation. Recently, there has been much work in finding robust quantitative versions of this theorem, in particular by Friedgut, Kalai, Keller and Nisan for elections with three candidates, and Isaksson, Kindler and Mossel for neutral functions with at least four candidates. I will talk about a quantitative version of the Gibbard-Satterthwaite theorem that holds for any number of (at least three) candidates and does not require the neutrality assumption. The proof combines several ideas from the two previous proofs and also crucially uses reverse hypercontractivity. I will also talk about coalitional manipulation: how big does a coalition typically have to be in order to be able to manipulate the outcome of the election?
This is based on joint works with Elchanan Mossel and Ariel Procaccia.
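To make "manipulation" concrete, here is a tiny example of a voter gaining by misreporting (my own construction using Borda count with alphabetical tie-breaking, not from the talk):

```python
from itertools import permutations

# Borda count over candidates "abc"; max() keeps alphabetical tie-breaking.
def borda_winner(profile):
    score = {c: 0 for c in "abc"}
    for ranking in profile:
        for pts, c in zip((2, 1, 0), ranking):
            score[c] += pts
    return max("abc", key=lambda c: score[c])

profile = ["abc", "bca", "cab"]   # a Condorcet cycle; honest winner is "a"
honest = borda_winner(profile)
truth = profile[1]                # voter 2's true ranking: b > c > a
for lie in permutations("abc"):
    w = borda_winner([profile[0], "".join(lie), profile[2]])
    if truth.index(w) < truth.index(honest):
        print("voter 2 reports", "".join(lie), "and elects", w, "over", honest)
        break
```

Here voter 2, who ranks the honest winner last, can bury her true favorite's rival and elect her second choice instead.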
Apr 2 | Ramesh Johari (Stanford) on Mean Field Equilibria of Multiarmed Bandit Games. [ paper ]
Much of the classical work on algorithms for multiarmed bandits focuses on rewards that are stationary over time. By contrast, we study multiarmed bandit (MAB) games, where the rewards obtained by an agent also depend on how many other agents choose the same arm (as might be the case in many competitive or cooperative scenarios). Such systems are naturally nonstationary due to the interdependent evolution of agents, and in general MAB games can be intractable to analyze using typical equilibrium concepts (such as perfect Bayesian equilibrium).
We introduce a general model of multiarmed bandit games, and study a notion of equilibrium inspired by a large system approximation known as mean field equilibrium. In such an equilibrium, the proportion of agents playing the various arms, called the population profile, is assumed stationary over time; the equilibrium requires a consistency check that this stationary profile arises from the policies chosen by the agents. We focus on the equilibrium behavior induced when agents use policies that are optimal for the single agent MAB problem.
We establish three main results in the paper. First, we prove existence of an MFE under general conditions. Second, we show under a contraction condition that the MFE is unique, and that the population profile converges to it from any initial state. Finally, we show that under the contraction condition, the MFE is a good approximation to the behavior of finite systems with many agents. The contraction condition requires that the agent population is sufficiently mixing and that the sensitivity of the reward function to the population profile is low enough. Through numerical experiments, we find that our main insights appear to hold even when the condition is violated.
This is joint work with Ramki Gummadi and Jia Yuan Yu.
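A minimal numerical sketch of the consistency idea (my own two-arm toy model, not the paper's): rewards fall as more of the population crowds onto an arm, and iterating a smoothed best response drives the profile to a stationary fixed point:

```python
import math

# Two arms (my own toy model, not the paper's): base rewards fall linearly
# with the fraction of the population on an arm; agents follow a smoothed
# ("logit") best response, iterated to a stationary population profile.
def stationary_profile(base=(1.0, 0.8), crowd=0.5, beta=2.0, iters=300):
    p = [0.5, 0.5]                                    # profile over the arms
    for _ in range(iters):
        r = [base[k] - crowd * p[k] for k in (0, 1)]  # crowding-adjusted rewards
        z = [math.exp(beta * x) for x in r]
        p = [x / sum(z) for x in z]                   # logit response
    return p

p = stationary_profile()
print(p)   # arm 0, with the higher base reward, attracts the larger share
```

With these parameters the update map is a contraction, so the iteration converges from any starting profile, mirroring the role of the contraction condition in the paper.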
No seminar during Spring Break.
Mar 19 | Ilan Adler (UC Berkeley) on The reduction of certain linear complementarity problems to bimatrix games.
It is well known that many optimization problems, ranging from linear programming to hard combinatorial problems, can be formulated as linear complementarity problems (LCPs). In addition, many engineering and economics problems (such as several models of market equilibrium) can be formulated directly as LCPs. One particular problem, finding a Nash equilibrium of a bimatrix game, which can be formulated as an LCP, motivated the development of the elegant Lemke algorithm to solve LCPs. While the algorithm always terminates, there is no guarantee that it will process any given problem (that is, find either a solution or a certificate that no solution exists). It turned out that, in general, Lemke-processable LCPs belong to the complexity class PPAD and that, quite surprisingly, finding a Nash equilibrium is PPAD-complete. Thus, Lemke-processable LCPs can be formulated as bimatrix games. However, the known formulation (which is cleverly designed for any PPAD problem) is very complicated, difficult to implement, and not readily available for potential insights. In this talk, I’ll present and discuss a simple reduction of Lemke-processable LCPs to bimatrix games that is easy to implement and has the potential to yield additional insights into problems (including several models of market equilibrium) for which the reduction is applicable.
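For concreteness, an LCP(M, q) asks for z ≥ 0 with w = Mz + q ≥ 0 and z_i w_i = 0 in every coordinate; a sketch of a solution checker on a made-up 2x2 instance (my own, not from the talk):

```python
# Made-up 2x2 instance (not from the talk). An LCP(M, q) asks for z >= 0
# with w = Mz + q >= 0 and z_i * w_i = 0 for every coordinate i.
def is_lcp_solution(M, q, z, tol=1e-9):
    w = [sum(M[i][j] * z[j] for j in range(len(z))) + q[i]
         for i in range(len(q))]
    return (all(zi >= -tol for zi in z)
            and all(wi >= -tol for wi in w)
            and all(abs(zi * wi) <= tol for zi, wi in zip(z, w)))

M = [[2.0, 1.0], [1.0, 2.0]]
q = [-1.0, -1.0]
print(is_lcp_solution(M, q, [1/3, 1/3]))   # True: w = (0, 0)
print(is_lcp_solution(M, q, [0.0, 0.0]))   # False: w = q < 0
```

Verifying a solution is easy; the hardness discussed in the talk lies in finding one.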
We study how standard auction objectives in sponsored search markets change with refinements in the prediction of the relevance (click-through rates) of ads. We study mechanisms that optimize for a convex combination of efficiency and revenue. We show that the objective function of such a mechanism can only improve with refined (improved) relevance predictions, i.e., the search engine has no disincentive to perform these refinements. More interestingly, we show that, under certain assumptions, refinements to relevance predictions can only improve the efficiency of any such mechanism. Our main technical contribution is to study how relevance refinements affect the similarity between ranking by virtual value (revenue ranking) and ranking by value (efficiency ranking). Finally, we discuss the implications of our results for the literature on signaling.
Mar 5 | Stratis Ioannidis (Technicolor) on Budget Feasible Mechanisms for Experimental Design.
In the classical experimental design setting, an experimenter with a budget has access to a population of multiple potential experiment subjects. Each subject is associated with a vector of public features, as well as a cost. Conducting an experiment on a subject reveals an unknown output value to the experimenter. Typically, it is assumed that such outputs are a function of the public features. The experimenter's goal is to select which experiments to conduct, subject to her budget constraint, so that she learns this function as accurately as possible.
We initiate the study of mechanisms for experimental design when subjects are strategic and may lie about their costs. In particular, we assume that outputs are a linear function of the features, and the experimenter learns the function through linear regression. We formulate the Experimental Design Problem (EDP) as finding a set of subjects that maximizes the experimenter's information gain, an objective also known as the D-optimality criterion. We present the first known deterministic, polynomial-time, truthful, budget feasible mechanism for EDP. Our mechanism yields a constant factor approximation (~12.98), and we show that no truthful, budget-feasible mechanism can achieve an approximation factor better than 2. We also show how to generalize our approach to a wide class of learning problems.
This is joint work with Thibaut Horel and Muthu Muthukrishnan.
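A rough sketch of the objective (my own toy code, not the paper's mechanism): greedily pick affordable subjects to maximize the D-optimality score log det(I + XᵀX) in two dimensions:

```python
import math

# My own toy code, not the paper's mechanism: greedily add affordable
# subjects to maximize the D-optimality score log det(I + X^T X) in 2-D.
def logdet2(m):
    return math.log(m[0][0] * m[1][1] - m[0][1] * m[1][0])

def info_gain(rows):
    m = [[1.0, 0.0], [0.0, 1.0]]                 # I + sum of x x^T
    for x in rows:
        for i in range(2):
            for j in range(2):
                m[i][j] += x[i] * x[j]
    return logdet2(m)

def greedy_select(features, costs, budget):
    chosen, spent = [], 0.0
    while True:
        best = max((i for i in range(len(features))
                    if i not in chosen and spent + costs[i] <= budget),
                   key=lambda i: info_gain([features[j] for j in chosen]
                                           + [features[i]]),
                   default=None)
        if best is None:
            return chosen
        chosen.append(best)
        spent += costs[best]

features = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
costs = [1.0, 1.0, 3.0]
print(greedy_select(features, costs, budget=2.0))   # [0, 1]
```

Greedy selection alone is not truthful when costs are reported strategically; the mechanism design challenge the talk addresses is layering incentive-compatible payments on top of such a selection rule.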
I will present a fairly general game-theoretic model of opinion formation in social networks, where opinions evolve in response to the opinions of neighbors. In this model, nodes form their opinions by maximizing agreement with friends, weighted by the strength of their relationships.
We consider the equilibrium of this process and characterize its existence, uniqueness, and efficiency with respect to a global objective. Our results provide general inefficiency bounds characterized by how players model their value for agreement. A simple lower-bound construction shows that the bounds we obtain are exact. Our results generalize recent work of Bindel et al., FOCS 2011.
This talk is based on joint work with Sreenivas Gollapudi (Microsoft Research Search Labs) and Kamesh Munagala (Duke University) that will appear in STOC 2013.
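A minimal sketch of such a dynamic (my own made-up weights and internal opinions, not the paper's model): each node repeatedly moves to the weighted average of its internal opinion and its neighbors' current opinions:

```python
# My own made-up weights and internal opinions (not the paper's model):
# each node repeatedly moves to the weighted average of its internal
# opinion and its neighbors' current opinions.
def equilibrium_opinions(internal, weights, iters=500):
    x = dict(internal)
    for _ in range(iters):
        for i in internal:
            w = weights.get(i, {})
            x[i] = ((internal[i] + sum(wij * x[j] for j, wij in w.items()))
                    / (1.0 + sum(w.values())))
    return x

internal = {"a": 0.0, "b": 1.0}
weights = {"a": {"b": 1.0}, "b": {"a": 1.0}}
x = equilibrium_opinions(internal, weights)
print(round(x["a"], 3), round(x["b"], 3))   # converges to 1/3 and 2/3
```

Each update is the best response in a quadratic agreement game, so the iteration converges to the unique equilibrium, where opinions are pulled toward but never reach consensus.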
In many models of multi-agent interaction -- whether in distributed protocols, markets, politics, etc. -- participants repeatedly pick a "best response" to the others' actions. This foundational "best-response dynamics" model has been the focus of decades of game theory work, along with more sophisticated notions of long-term strategies. But in many real-life settings, like internet protocols and global markets, the agents are even more handicapped than in classical best response: agents often act asynchronously, i.e., before learning about some of the other agents' recent actions. In most kinds of distributed systems, removing this asynchrony would require significant, often impractical, engineering costs. At this natural meeting point of the game theory tradition and distributed systems practice, we tackle these questions:
(1) What properties characterize systems where best-response dynamics are guaranteed to always converge to an equilibrium, even under asynchrony? (2) What is the (computational and communication) complexity of checking a system for guaranteed convergence?
We show that, in general, verifying guaranteed convergence is intractable. In fact, our main negative result establishes that this task is undecidable. We exhibit, in contrast, positive results for several environments of interest, including complete, computationally tractable characterizations of convergent systems in two natural settings: multi-party interactions with binary strategy choices per agent, and 2-party interactions with arbitrarily large strategy spaces. This yields practical algorithms for verifying convergence, with an independently interesting application to tractably verifying convergence of asynchronous Boolean circuits with feedback.
This is joint work with Roee Engelberg, Michael Schapira, and David Wajc.
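To make "best-response dynamics" concrete, here is a toy simulation (my own 2x2 examples, not the paper's verification algorithm) of alternating best responses, one game converging and one cycling:

```python
# My own 2x2 examples, not the paper's verification algorithm: alternate
# best responses and report whether play settles at a pure equilibrium.
def best_response_converges(payoffs, start=(0, 0), rounds=50):
    """payoffs[(a, b)] = (row payoff, column payoff) for actions in {0, 1}."""
    a, b = start
    for _ in range(rounds):
        a2 = max((0, 1), key=lambda x: payoffs[(x, b)][0])   # row responds
        b2 = max((0, 1), key=lambda y: payoffs[(a2, y)][1])  # column responds
        if (a2, b2) == (a, b):
            return True, (a, b)      # reached a pure Nash equilibrium
        a, b = a2, b2
    return False, None               # still moving: a best-response cycle

coordination = {(0, 0): (2, 2), (0, 1): (0, 0), (1, 0): (0, 0), (1, 1): (1, 1)}
matching_pennies = {(0, 0): (1, -1), (0, 1): (-1, 1),
                    (1, 0): (-1, 1), (1, 1): (1, -1)}
print(best_response_converges(coordination))       # converges
print(best_response_converges(matching_pennies))   # cycles forever
```

This simulation only samples one schedule of play; the paper's question is harder, asking whether convergence is guaranteed under every asynchronous schedule.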
Feb 12 | Udi Weinsberg (Technicolor Research, Palo Alto) on Towards Building a Privacy-Preserving Recommender System.
In this talk I will show how easy it is for a recommender system to learn private information from user ratings, and discuss methods for helping users preserve their privacy while getting accurate recommendations. I will discuss inference and obfuscation schemes and some theoretical bounds, and present a crypto-based system for performing ridge regression without revealing private data.
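As background, plain (non-private) one-dimensional ridge regression has a simple closed form; the point of the system above is to compute estimates like this without exposing the underlying records (toy example of the non-private computation, my own):

```python
# Toy, non-private computation (my own example): one-dimensional ridge
# regression has the closed form beta = sum(x*y) / (sum(x*x) + lam).
def ridge_1d(xs, ys, lam):
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]                        # exactly y = 2x
print(ridge_1d(xs, ys, lam=0.0))            # 2.0 (ordinary least squares)
print(round(ridge_1d(xs, ys, lam=1.0), 3))  # 1.867, shrunk by the penalty
```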
Feb 5 | C. Matthew Leister (UC Berkeley) on Trading Networks and Equilibrium Intermediation. [ paper ]
We consider a network of intermediaries facilitating exchange between a buyer and a seller. Intermediary traders face a private trading cost, a network characterizes the set of feasible transactions, and an auction mechanism sets prices. We investigate stable and equilibrium network configurations. Bottlenecks in the trading network arise naturally in a free-entry competitive equilibrium and equilibrium markets organize into an asymmetric structure with many traders near the buyer and fewer traders near the seller. Such asymmetries can render equilibrium markets fragile by amplifying the shocks experienced by key intermediaries in the economy.
Jan 29 | Rafael Frongillo (UC Berkeley) on General Characterizations of Truthfulness via Convex Analysis. [ paper ]
Joint work with Ian Kash: I will present a new model of truthful elicitation which generalizes and extends both mechanisms and scoring rules. Our main result is a characterization theorem using the tools of convex analysis, yielding characterizations of mechanisms and scoring rules as special cases. Moreover, I'll show how a variety of existing results in the mechanism design literature are often simpler and more direct when first phrased in terms of convexity. I will wrap up with a few interesting generalizations and applications to existing problems.
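A small sanity check of the truthfulness-convexity connection (my own toy example with the binary Brier score, not from the talk): a proper scoring rule makes the expected score peak exactly at the truthful report:

```python
# My own toy check, not from the talk: under the binary Brier score,
# the expected score of a report q given true belief p peaks at q = p.
def expected_brier(belief, report):
    # Score for outcome o with report q is 1 - (o - q)^2.
    return (belief * (1 - (1 - report) ** 2)
            + (1 - belief) * (1 - (0 - report) ** 2))

p = 0.7
best = max((i / 100 for i in range(101)), key=lambda q: expected_brier(p, q))
print(best)   # the truthful report
```

The expected score, viewed as a function of the belief, is convex, which is the structural fact the talk's characterization exploits.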
Jan 22 | Valerio Capraro (U Southampton) on A Solution Concept for Games with Altruism and Cooperation. [ paper ] [ slides ]
We propose a new solution concept for one-shot normal form games formalising a principle that can be summarised as follows: players forecast how the game would be played if they formed coalitions, and then they play according to their best forecast. We show that this "cooperative equilibrium" exists for all finite games and that it matches the experimental data collected for the Traveler’s dilemma, the Prisoner’s dilemma, the Nash bargaining problem, Bertrand competition, the public good game, the ultimatum game, Hawk-Dove, and other specific games for which other solution concepts fail to predict human play.