
Communications of the ACM

Review articles

Algorithmic Game Theory


Credit: Celia Johnson

The widespread adoption of the Internet and the emergence of the Web changed society's relationship with computers. The primary role of a computer evolved from a stand-alone, well-understood machine for executing software to a conduit for global communication, content-dissemination, and commerce. The algorithms and complexity theory community has responded to these changes by formulating novel problems, goals, and design and analysis techniques relevant for modern applications.

Game theory, which has studied deeply the interaction between competing or cooperating individuals, plays a central role in these new developments. Research on the interface of theoretical computer science and game theory, an area now known as algorithmic game theory (AGT), has exploded over the past 10 years. The primary research themes in AGT differ from those in classical microeconomics and game theory in important, albeit predictable, respects. First, in application areas: Internet-like networks and nontraditional auctions motivate much of the work in AGT. Second, in its quantitative engineering approach: AGT research typically models applications via concrete optimization problems and seeks optimal solutions, impossibility results, and upper and lower bounds on feasible approximation guarantees. Finally, AGT usually adopts reasonable (for example, polynomial-time) computational complexity as a binding constraint on the feasible behavior of system designers and participants. These themes, which have played only a peripheral role in traditional game theory, give AGT its distinct character and relevance.

Here, we touch on the current dominant research trends in AGT, loosely following the organization of the first book in the field.30 We focus on contributions of the algorithms and complexity theory community; see two recent articles in Communications18,40 and the references therein for alternative perspectives on computer science and game theory.

Back to Top

Algorithmic Mechanism Design

Algorithmic mechanism design studies optimization problems where the underlying data (such as the values of goods and the costs of performing a task) is initially unknown to the algorithm designer, and must be implicitly or explicitly elicited from self-interested participants. Auction settings are canonical examples, where the private data is the willingness to pay of the bidders for the goods on sale, and the optimization problem is to allocate the goods to maximize some objective, such as revenue or overall value to society. A "mechanism" is a protocol that interacts with participants and determines a solution to the underlying optimization problem.

There is a complex dependence between the way a mechanism employs elicited data and participant behavior. For example, consider the sale of a single good in a sealed-bid auction with several bidders. In a "first-price" auction, the selling price is the bid of the winner (that is, the maximum bid). Bidders naturally shade their bids below their maximum willingness to pay in first-price auctions, aspiring to achieve the lowest-possible price subject to winning the auction. Determining how much to shade requires guessing about the behavior of the other bidders. A different auction is the "second-price" auction, in which the selling price is only the second-highest bid. A famous result of Vickrey43 is that every participant of a second-price auction may as well bid its true value for the good: intuitively, a second-price auction optimally shades the bid of the winner on its behalf, to the minimum alternative winning bid. eBay and Amazon auctions are similar to second-price auctions in many (but not all) respects; see Steiglitz42 for a detailed discussion. Keyword search auctions, such as those run by Google, Yahoo!, and Microsoft, are more complex variants of second-price auctions with multiple heterogeneous goods, corresponding to the potential ad slots on a search results page. Lahaie et al.30 provide an overview of theoretical work on search auctions.

While the economic literature on mechanism design is quite mature,20 computer scientists have initiated a number of new research directions. We concentrate here on the emphasis in algorithmic mechanism design on complexity bounds and worst-case approximation guarantees, as first proposed by Nisan and Ronen.29 Additional aspects including prior-free revenue-maximization, distributed (or Internet-suitable) mechanism design, and online (or real time) mechanism design are discussed in Nisan et al.30

The technical core of this part of algorithmic mechanism design is the following deep question:

(Q1) To what extent is "incentive-compatible" efficient computation fundamentally less powerful than "classical" efficient computation?

To translate this question into mathematics, reconsider the Vickrey (second-price) auction for selling a single good. Each bidder i has a private willingness-to-pay vi and submits to the auctioneer a bid bi. The auction comprises two algorithms: an allocation algorithm, which picks a winner, namely the highest bidder; and a payment algorithm, which uses the bids to charge payments, namely 0 for the losers and the second-highest bid for the winner. We argued intuitively that this auction is truthful in the following sense: for every bidder i and every set of bids by the other participants, bidder i maximizes its "net value" (its value for the good, if received, minus its payment, if any) by bidding its true private value: bi = vi. Moreover, no false bid is as good as the truthful bid for all possible bids by the other participants. Assuming all bidders bid truthfully (as they should), the Vickrey auction solves the social welfare maximization problem, in the sense that the good is allocated to the participant with the highest value for it.
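The truthfulness argument can be checked mechanically. The following Python sketch (illustrative code, not from the original auction literature) implements the allocation and payment algorithms and verifies that, against a grid of competing bids, no deviating bid for bidder 0 ever beats the truthful bid:

```python
# Sketch of a sealed-bid second-price (Vickrey) auction; bids are given as a
# list of nonnegative numbers indexed by bidder.
import itertools

def vickrey_auction(bids):
    """Highest bidder wins and pays the second-highest bid."""
    winner = max(range(len(bids)), key=lambda i: bids[i])
    price = max(b for i, b in enumerate(bids) if i != winner)
    return winner, price

def utility(value, bids, i):
    """Net value of bidder i: value minus payment if i wins, and 0 otherwise."""
    winner, price = vickrey_auction(bids)
    return value - price if winner == i else 0.0

# Truthfulness check for bidder 0 with value 10: against a grid of competing
# bids, no deviating bid ever earns more than bidding the true value.
value = 10.0
for others in itertools.product([1.0, 5.0, 9.0, 11.0, 15.0], repeat=2):
    truthful = utility(value, [value, *others], 0)
    for deviation in [0.0, 3.0, 8.0, 12.0, 20.0]:
        assert utility(value, [deviation, *others], 0) <= truthful
print("no profitable deviation found")
```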

More generally, an allocation algorithm x is implementable if, for a judiciously chosen payment algorithm p, coupling x with p yields a truthful mechanism: every participant is guaranteed to maximize its payoff by reporting its true preferences. For a single-good auction, the "highest-bidder" allocation algorithm is implementable (as we have seen); the "second-highest bidder" allocation algorithm is not (a straightforward exercise). Thus some but not all algorithms are implementable.

We can mathematically phrase the question (Q1) as follows: Are implementable algorithms less powerful than arbitrary algorithms for solving fundamental optimization problems?

Understanding this question involves two interrelated goals: characterization theorems and approximation bounds.

(G1) Usefully characterize the implementable allocation algorithms for an optimization problem.

(G2) Prove upper and lower bounds on the best-possible solution quality achieved by an implementable algorithm, possibly subject to additional constraints such as polynomial running time.

The second goal quantifies the limitations of implementable algorithms via an approximation measure; the most commonly used such measure is the worst-case ratio, over all possible inputs, between the objective function value of the algorithm's solution and the optimal objective function value. The first goal aims to reformulate the unwieldy definition of implementability into a more operational form amenable to both upper and lower approximation bounds. Both goals, and especially (G1), seem to grow more complex with the number of independent parameters required to describe the private information of a participant.

Versions of (G2) pervade modern algorithmic research: for a given "constrained computational model," where the constraint can be either computational (as for polynomial-time approximation algorithms) or information-theoretic (as for online algorithms), quantify its limitations for optimization and approximation. Goal (G1) reflects the additional difficulty in algorithmic mechanism design that even the "computational model" (of implementable algorithms) induced by strategic constraints is poorly understood. For example, determining whether or not a given algorithm is online is intuitively far easier than checking if one is implementable.

Single-Parameter Mechanism Design. This two-step approach is vividly illustrated by the important special case of single-parameter problems, where goal (G1) has been completely resolved. A mechanism design problem is single-parameter if the possible outcomes are real n-vectors (x1, ..., xn) and each participant i has an objective function of the form vi·xi for a private real number vi (the "single parameter"). The numbers xi and vi can be thought of as the quantity received and the value per-unit of a good, respectively. A single-item auction is the special case in which each outcome is either a standard basis vector or the all-zero vector. Keyword search auctions are also single-parameter, under the assumptions that every advertiser cares only about the probability xi of a click on its sponsored link and has a common value vi for every such click.

An algorithm for a single-parameter problem is monotone if a greater bid begets a greater allocation: increasing the value of a bid (keeping the other bids fixed) can only increase the corresponding value of the computed xi. For example, the "highest bidder" allocation algorithm for a single-good auction is monotone, while the "second-highest bidder" allocation algorithm is not. In general, monotonicity characterizes implementability for single-parameter problems.
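Both claims can be checked numerically. As an illustrative sketch (assuming a simple 0/1 allocation representation), the following Python code shows that raising a bid never costs the highest-bidder rule its winner, while the second-highest-bidder rule can punish a raised bid:

```python
# Monotonicity check for single-good allocation rules: a rule maps bids to a
# 0/1 allocation per bidder. "Highest bidder wins" is monotone;
# "second-highest bidder wins" is not.

def highest_bidder(bids):
    w = max(range(len(bids)), key=lambda i: bids[i])
    return [1 if i == w else 0 for i in range(len(bids))]

def second_highest_bidder(bids):
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    w = order[1]
    return [1 if i == w else 0 for i in range(len(bids))]

def is_monotone_at(rule, bids, i, raised):
    # Raising bidder i's bid (others fixed) must not decrease its allocation.
    lower = rule(bids)[i]
    higher = rule(bids[:i] + [raised] + bids[i + 1:])[i]
    return higher >= lower

bids = [4.0, 7.0, 2.0]
assert is_monotone_at(highest_bidder, bids, 0, 9.0)
# Bidder 0 currently holds the second-highest bid and wins; raising its bid
# to 9.0 makes it the highest bid, so it loses the good: a monotonicity violation.
assert not is_monotone_at(second_highest_bidder, bids, 0, 9.0)
```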

Myerson's Lemma.27 An allocation algorithm for a single-parameter mechanism design problem is implementable if and only if it is monotone.

Myerson's Lemma is a useful solution to the first goal (G1) and reduces implementable algorithm design to monotone algorithm design. For example, consider the following "rank-by-weighted bid" allocation algorithm for a keyword search auction. Advertisers' bids are sorted in decreasing order, possibly after scaling by advertiser-specific relevance factors, and ad slots are populated in this order. Assuming that the probability of a click is higher in higher slots, every such algorithm is monotone: increasing one's bid can only improve one's position in the ordering, which in turn can only increase the probability of a click. Thus, Myerson's Lemma guarantees an analog of the second-price rule that extends the allocation algorithm into a truthful auction.a

Despite our thorough understanding of goal (G1), question (Q1) remains open for single-parameter problems. A single-parameter scheduling problem proposed by Archer and Tardos1 had been the most natural candidate for differentiating between the optimization power of monotone and arbitrary polynomial-time algorithms, but Dhangwatnotai et al.14 recently gave a (randomized) polynomial-time monotone algorithm for the problem with an approximation guarantee as good as that of the best-possible polynomial-time algorithm (assuming P ≠ NP).


Multiparameter Mechanism Design. Many important mechanism design problems are not single-parameter. Combinatorial auctions,11 in which each participant aims to acquire a heterogeneous set of goods and has unrelated values for different sets, are a practical and basic example. Combinatorial auctions are used in practice to sell wireless spectrum (where the goods are different licenses), with auction designs by theoretical economists generating billions of dollars of revenue over the past decade.11 Their complexity stems from "complements," meaning goods that are more useful when purchased in tandem (for example, spectrum licenses for small but adjacent regions); and "substitutes," meaning goods that are partially redundant (for example, two different but functionally identical licenses for the same region). Each bidder in a combinatorial auction has, in principle, an exponential number of private parameters: one private value for each subset of goods.

Multiparameter mechanism design is complex and our current understanding of goals (G1) and (G2) is primitive for most problems of interest. There are natural optimization problems for which there is a provable gap between the best-possible worst-case approximation ratio of implementable and arbitrary polynomial-time deterministic algorithms. This fact was first proved by Lavi et al.;23 more recently, Papadimitriou et al.33 showed that this gap can be as large as a polynomial in the number of bidders. Because of its importance and abundance of open questions, multiparameter mechanism design has been a hotbed of activity over the past few years. See Roughgarden35 for a survey of the primary research threads, including upper and lower approximation bounds for polynomial-time welfare maximization for combinatorial auctions, and work toward multiparameter analogs of Myerson's Lemma.

Quantifying Inefficiency and the Price of Anarchy

The truthful mechanisms examined earlier are, by design, strategically degenerate in that the best course of action of a participant (that is, truthtelling) does not depend on the actions taken by the others. When a designer cannot specify the rules of the game and directly dictate the allocation of resources (or when there is no central designer at all), dependencies between different participants' optimal courses of action are generally unavoidable and preclude exact optimization of standard objective functions. This harsh reality motivates adopting an equilibrium concept (a rigorous proposal for the possible outcomes of a game with self-interested participants) and an approximation measure that quantifies the inefficiency of a game's equilibria, to address the following basic question:

(Q2) When, and in what senses, are game-theoretic equilibria guaranteed to approximately optimize natural objective functions?

Such a guarantee implies that the benefit of imposing additional control over the system is small, and is particularly reassuring when implementing an optimal solution is infeasible (as in a typical Internet application).

Routing with Congestion. There are now numerous answers to question (Q2) in different models. We describe one by Roughgarden and Tardos,37,39 for a model of "selfish routing" originally proposed for road traffic (see Beckmann4) and subsequently adapted to communication networks (see Bertsekas and Tsitsiklis5). This was the first general approximation bound on the inefficiency of equilibria; the idea of quantifying such inefficiency was explored previously in a scheduling model.22

Consider a directed graph with fixed traffic rates between various origin-destination pairs in which the traffic chooses routes to minimize individual cost; see also Figure 1. Here, we assume that the traffic comprises a large number of selfish users, each of negligible size, such as drivers on a highway or packets in a network. Edge costs are congestion-dependent, with the continuous, nondecreasing function ce(x) denoting the per-unit cost incurred by traffic on edge e when x units of traffic use it. In an equilibrium, each user travels along a minimum-cost path from its origin to its destination, given the congestion caused by the traffic. These selfish routing games are strategically non-trivial in that the minimum-cost path for a given user generally depends on the paths chosen by the others.

For example, in a "Pigou-like network" (Figure 1a), r units of selfish traffic autonomously decide between parallel edges e1 and e2 that connect the origin s to the destination t. Suppose the second edge has some cost function c2(·), and the first edge has a constant cost function c1 everywhere equal to c2(r). Such networks are strategically trivial, just like the truthful mechanisms noted earlier: the second edge's cost is never larger than that of the first, even when it is fully congested. For this reason, all traffic uses the second edge at equilibrium. This equilibrium does not generally minimize the average cost of all users. For example, if r = 1 and c2(x) = x as in Figure 1a, the average cost at equilibrium is 1, while splitting the traffic equally between the two edges yields a routing with average cost 3/4. The latter traffic pattern is not an equilibrium because of a "congestion externality": a selfish network user routed on the first edge would switch to the second edge, indifferent to the fact that this switch (slightly) increases the cost incurred by a large portion of the population. Similarly, in the Braess's Paradox7 network of Figure 1b, the average cost at equilibrium is 2 (with all traffic on the zig-zag path), while a benevolent dictator could route the traffic at average cost 3/2 (by splitting traffic between the two two-hop paths).b

The price of anarchy (POA) of a selfish routing network is the ratio of the average user cost at equilibrium and in an optimal routing: 4/3 in both of the networks in Figure 1. The closer the POA is to 1, the lesser the consequences of selfish behavior. Replacing the cost function of the second edge in Figure 1a by c2(x) = x^d for large d shows that the POA can be arbitrarily large in such networks, and suggests that the POA is governed by the "degree of nonlinearity" of the cost function c2. A key result formalizes and extends this intuition to arbitrary networks: among all networks with cost functions lying in a set C (for example, bounded-degree polynomials with nonnegative coefficients), the largest-possible POA is achieved already in Pigou-like networks.37 Conceptually complex topologies do not amplify the worst-case POA. This reduction permits the easy calculation of tight bounds on the worst-case POA for most interesting sets C of cost functions. For example, the POA of every selfish routing network with affine cost functions (of the form ce(x) = aex + be for nonnegative ae, be) is at most 4/3, with a matching lower bound provided by the examples in Figure 1.39 See Nisan et al.30 for a recent survey detailing these and related results.
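These POA values can be reproduced numerically for the Pigou-like network of Figure 1a. The sketch below (illustrative, with a simple grid search standing in for an exact minimization) recovers the 4/3 figure for the affine case and shows the POA growing with the degree d:

```python
# Equilibrium vs. optimal average cost in the Pigou-like network of Figure 1a
# (unit traffic; c1(x) = 1 constant, c2(x) = x^d on the second edge). At
# equilibrium all traffic takes the second edge, for an average cost of 1;
# the optimum routes a fraction x on edge 2 to minimize x * x^d + (1 - x) * 1.

def optimal_avg_cost(d, steps=100000):
    best = 1.0  # x = 0: all traffic on the constant-cost edge
    for k in range(steps + 1):
        x = k / steps
        best = min(best, x ** (d + 1) + (1 - x))
    return best

def pigou_poa(d):
    return 1.0 / optimal_avg_cost(d)  # equilibrium average cost is 1

print(round(pigou_poa(1), 3))   # 1.333: the 4/3 bound for affine costs is tight
print(round(pigou_poa(10), 2))  # the POA grows with the degree of nonlinearity d
```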

These POA bounds provide a theoretical justification for a common rule of thumb used in network design and management: overprovisioning networks with extra capacity ensures good performance. Precisely, suppose every edge e of a network has a capacity ue and a corresponding cost function ce(x) = 1/(ue − x); see Figure 2a. (If x ≥ ue, we interpret the cost as infinite.) This is the standard M/M/1 queueing delay function with service rate ue. We say that a network is β-overprovisioned for β ∈ (0, 1) if, at equilibrium, at least a β fraction of each edge's capacity remains unused. The following is a tight bound on the POA for such networks.

Theorem (Consequence of Roughgarden37). The POA of every β-overprovisioned network is at most

(1/2)·(1 + √(1/β)).
Thus even 10% extra capacity reduces the worst-case price of anarchy of selfish routing to roughly 2.
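Evaluating the bound makes the 10% claim concrete (assuming the bound takes the form (1/2)·(1 + √(1/β)), which is consistent with the "roughly 2" figure above):

```python
# Evaluating the overprovisioning POA bound (1/2) * (1 + sqrt(1/beta)) for a
# beta-overprovisioned network; this form matches the "roughly 2 at 10% spare
# capacity" claim in the text.
import math

def poa_bound(beta):
    return 0.5 * (1 + math.sqrt(1 / beta))

print(round(poa_bound(0.1), 2))  # ~2.08: 10% spare capacity gives POA roughly 2
```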

Further Aspects of Quantifying Inefficiency. We have barely scratched the surface of recent work on equilibrium efficiency analyses. For an overview of work on some other application domains, including resource allocation, scheduling, facility location, and network design, see Nisan et al.30

An important emerging trend in this area is to prove POA-type bounds under increasingly weak assumptions on the rationality of participants. Recall in algorithmic mechanism design, our only assumption was that participants will make use of a foolproof strategy (one that dominates all others), should one be available. Here, we implicitly assumed that selfish participants can reach an equilibrium of a game without such foolproof strategies, presumably through repeated experimentation. This much stronger assumption has been addressed in two different ways in the recent literature. The first is to formally justify it by positing natural experimentation strategies and proving that they quickly reach a (possibly approximate) equilibrium; see Chien and Sinclair9 and the references therein for a sampling of such results. The second is to prove POA-like guarantees that apply "on average," even when such experimentation strategies fail to converge to an equilibrium. Remarkably, such approximation bounds hold in interesting classes of games, including in selfish routing networks. See Awerbuch et al.,2 Blum et al.,6 and Goemans et al.19 for initial formalizations of this approach. Roughgarden36 recently proved the general result that, under fairly weak conditions, POA bounds for equilibria extend automatically to the results of repeated experimentation.

Back to Top

Complexity of Equilibrium Computation

Equilibrium concepts, most famously the Nash equilibrium,28 play a starring role in game theory and microeconomics. If nothing else, a notion of equilibrium describes outcomes that, once reached, persist under some model of individual behavior. In engineering applications we generally demand a stronger interpretation of an equilibrium, as a credible prediction of the long-run state of the system. But none of the standard equilibrium notions or the corresponding proofs of existence suggest how to arrive at an equilibrium with a reasonable amount of effort. This fact motivates the following questions.

(Q3) When can the participants of a game quickly converge to an equilibrium? More modestly, when can a centralized algorithm quickly compute an equilibrium?

These questions are interesting for two reasons. First, algorithms for equilibrium computation can be useful practically, for example in game-playing and for multi-agent reasoning.41 Second, assuming that players can invest only polynomial computation in playing a game, resolving the complexity of computing an equilibrium concept has economic implications: a polynomial-time algorithm is an important step toward establishing the concept's credibility, while an intractability result casts doubt on its predictive power.

There has been a frenzy of recent work on these questions, for many different fundamental equilibrium concepts. Perhaps the most celebrated results in the area concern the PPAD-completeness of computing mixed-strategy Nash equilibria in finite games with two or more players.8,12 To briefly convey the spirit of the area with a minimum of technical fuss, we instead discuss the complexity of converging to and computing pure-strategy Nash equilibria in a variant of the routing games discussed earlier. We then discuss the key differences between the two settings. For work on the complexity of computing other equilibrium concepts, such as market, correlated, and approximate Nash equilibria, and for a discussion of equilibrium computation in extensive-form, compact, randomly generated, and stochastic games, see Nisan30 and Roughgarden38 and the references therein.

Pure Nash Equilibria in Network Congestion Games. In the atomic variant of selfish routing, there are a finite number k of players that each control a non-negligible amount of traffic (say one unit each) and choose a single route for it. Each edge cost function ce: {1, 2, ..., k} → R+, describing the per-player cost along an edge as a function of its number of users, is nondecreasing. An outcome (P1, ..., Pk), a choice of a path Pi for each player i, is a pure-strategy Nash equilibrium (PNE) if each player simultaneously chooses a best response: a path with minimum-possible cost, given the paths chosen by the other players. For instance, consider Pigou's example (Figure 1a) with the constant cost on the upper edge raised from 1 to 2. If there are two players (each with origin s and destination t), then there are three PNE: one with both players on the lower link, and two in which each link is used by a single player. In every case, a deviating player would incur cost 2 and be no better off than in the equilibrium.
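The three equilibria can be found by brute force. The following Python sketch (a hypothetical encoding of the two-link, two-player game above, with per-player lower-link cost equal to its load) enumerates all strategy profiles and tests the best-response condition:

```python
# Enumerating the pure-strategy Nash equilibria of the two-player variant of
# Pigou's example: edge costs c_upper(x) = 2 and c_lower(x) = x.
from itertools import product

def cost(edge, load):
    return 2 if edge == "upper" else load

def is_pne(profile):
    loads = {e: profile.count(e) for e in ("upper", "lower")}
    for e in profile:
        other = "lower" if e == "upper" else "upper"
        # Cost after a unilateral switch: the other edge gains one user.
        if cost(other, loads[other] + 1) < cost(e, loads[e]):
            return False
    return True

pnes = [p for p in product(("upper", "lower"), repeat=2) if is_pne(p)]
print(pnes)  # three equilibria: the two split profiles, and both on lower
```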


Best-response dynamics is a simple model of experimentation by players over time: while the current outcome is not a PNE, choose an arbitrary player that is not using a best response, and update its path to a best response. The update of one player usually changes the best responses of the others; for this reason, best-response dynamics fails to converge in many games (such as "Rock-Paper-Scissors"). In an atomic selfish routing network, however, every iteration of best-response dynamics strictly decreases the potential function

Φ(P1, ..., Pk) = Σe [ce(1) + ce(2) + ... + ce(xe)],

where xe denotes the number of paths Pi that contain edge e, and is thus guaranteed to terminate, necessarily at a PNE.26,34 Does convergence require polynomial or exponential time? Can we compute a PNE of such a game by other means in polynomial time?
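The following Python sketch runs best-response dynamics on a small hypothetical instance with three parallel links, asserting that the potential function strictly decreases at every improving move until a PNE is reached:

```python
# Best-response dynamics on an atomic routing game with three parallel links.
# The potential Phi = sum over edges e of c_e(1) + ... + c_e(x_e) strictly
# decreases at every improving move, so the dynamics must terminate at a PNE.

costs = [lambda x: x, lambda x: 2 * x, lambda x: 3.0]  # per-player cost per link
k = 4  # number of players

def potential(profile):
    loads = [profile.count(e) for e in range(len(costs))]
    return sum(costs[e](i) for e in range(len(costs))
               for i in range(1, loads[e] + 1))

def best_response(profile, i):
    def cost_if(e):  # player i's cost after moving to link e
        load = profile.count(e) + (0 if profile[i] == e else 1)
        return costs[e](load)
    return min(range(len(costs)), key=cost_if)

profile = [0] * k  # start with everyone on link 0
while True:
    phi = potential(profile)
    moved = False
    for i in range(k):
        br = best_response(profile, i)
        if br != profile[i]:
            profile[i] = br
            assert potential(profile) < phi  # potential strictly decreases
            moved = True
            break
    if not moved:
        break  # no player can improve: a pure-strategy Nash equilibrium

print(sorted(profile))  # [0, 0, 0, 1]: one player moves to link 1, then stability
```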

Assume for the moment that the problem of computing a PNE of an atomic selfish routing network is not solvable in polynomial time; how would we amass evidence for this fact? An obvious idea is to prove that the problem is NP-hard. Remarkably, a short argument21,25 shows that this is possible only if NP = coNP! Intuitively, solving an NP-hard problem like satisfiability means to either exhibit a satisfying truth assignment of the given Boolean formula or to correctly determine that none exist. Computing a PNE of an atomic selfish routing game appears easier because the latter situation (of there being no PNE) can be ruled out a priorithe "only" challenge is to exhibit a solution in polynomial time.c

To motivate the definition of the appropriate complexity class, recall that problems in the class NP are characterized by short and efficiently verifiable witnesses of membership, such as satisfying truth assignments or Hamiltonian cycles. There is thus a generic "brute-force search" algorithm for NP problems: given an input, enumerate the exponentially many possible witnesses of membership, and check if any of them are valid. Computing a PNE of an atomic selfish routing game appears to be easier than an NP-hard problem because there is a guided search algorithm (namely, best-response dynamics) that navigates the set of possible witnesses and is guaranteed to terminate with a legitimate one. At worst, computing a PNE might be as hard as all problems solvable by such a guided search procedure. This is in fact the case, as we formalize here.

What are the minimal ingredients that guarantee that a problem is solvable via guided search? The answer is provided by the complexity class PLS (for "polynomial local search").21 A PLS problem is described by three polynomial-time algorithms: one to accept an instance and output an initial candidate solution; one to evaluate the objective function value of a candidate solution; and one that either verifies local optimality (for some local neighborhood) or else returns a neighboring solution with strictly better objective function value. To solve a PLS problem means to compute a local optimum, by local search or by other means. For example, computing a PNE of an atomic selfish routing game can be cast as a PLS problem by adopting the potential function as an objective function, and defining two outcomes to be neighbors if all but one player choose the same path in both. Local minima then correspond to the PNE of the game. A problem in PLS is then PLS-complete if every problem in PLS reduces to it in polynomial time, in which case the complete problem is solvable in polynomial time only if every problem in PLS is.
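The three ingredients can be sketched as code. The example below uses max-cut under the single-flip neighborhood, a canonical PLS-complete problem (the encoding is illustrative, with the graph given as a plain dictionary):

```python
# Minimal sketch of the three polynomial-time ingredients of a PLS problem,
# for max-cut with the single-flip neighborhood. "Solving" the problem means
# returning any local optimum.

def initial_solution(graph):
    return [0] * graph["n"]  # all vertices on one side

def objective(graph, cut):
    # Total weight of edges crossing the cut (to be locally maximized).
    return sum(w for (u, v, w) in graph["edges"] if cut[u] != cut[v])

def improving_neighbor(graph, cut):
    # Return a strictly better single-flip neighbor, or None if locally optimal.
    base = objective(graph, cut)
    for u in range(graph["n"]):
        neighbor = cut[:]
        neighbor[u] = 1 - neighbor[u]
        if objective(graph, neighbor) > base:
            return neighbor
    return None

def local_search(graph):
    cut = initial_solution(graph)
    while (better := improving_neighbor(graph, cut)) is not None:
        cut = better
    return cut

graph = {"n": 4, "edges": [(0, 1, 3), (1, 2, 1), (2, 3, 2), (0, 3, 1)]}
cut = local_search(graph)
assert improving_neighbor(graph, cut) is None  # a local optimum
```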

The problem of computing a PNE of an atomic selfish routing network is PLS-complete.17 It is therefore polynomial-time solvable if and only if P = PLS. In the spirit of the P vs. NP question, it is generally believed that P ≠ PLS, but researchers seem far from a resolution in either direction. Since PLS contains several important problems that have resisted all attempts at a computationally efficient solution, PLS-hardness is viewed as strong evidence that a problem will not be solved in polynomial time (at least in the near future).

Mixed-Strategy Nash Equilibria and PPAD. A mixed strategy is a probability distribution over the pure strategies of a player. In a mixed-strategy Nash equilibrium (MNE), every player simultaneously chooses a mixed strategy maximizing its expected payoff, given those chosen by the others. For example, in "Rock-Paper-Scissors," with each player receiving payoff 1 for a win, 0 for a draw, and -1 for a loss, the only MNE has each player randomizing uniformly over its three strategies to obtain an expected payoff of 0. Nash proved that every game with a finite number of players and strategies has at least one MNE.28 Computing an MNE of a finite game is a central equilibrium computation problem.
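The Rock-Paper-Scissors claim is easy to verify: against a uniformly random opponent, every pure strategy earns expected payoff 0, so each player is indifferent and the uniform mix is a best response. A minimal Python check:

```python
# Verifying the uniform mixed strategy in Rock-Paper-Scissors: against a
# uniform opponent, every pure strategy has expected payoff 0, so no
# deviation improves on the uniform mix.

A = [[0, -1, 1],   # row player's payoff: Rock vs (Rock, Paper, Scissors)
     [1, 0, -1],   # Paper
     [-1, 1, 0]]   # Scissors

uniform = [1 / 3, 1 / 3, 1 / 3]
expected = [sum(A[i][j] * uniform[j] for j in range(3)) for i in range(3)]
print(expected)  # every pure strategy earns 0 against the uniform mix
```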

We focus on the two-player ("bimatrix") case, where the input is two m × n payoff matrices (one for each player) with integer entries; with three or more players, the problem appears to be harder in a precise complexity-theoretic sense.15 We emphasize that the two payoff matrices are completely unrelated, and need not be "zero-sum" like in Rock-Paper-Scissors. (When the two payoff matrices sum to a constant matrix, an MNE can be computed in polynomial time via linear programming; see for example, Nisan30 for details.)

There is a non-obvious "guided search" algorithm for two-player games called the Lemke-Howson algorithm;24 see von Stengel30 for a careful exposition. This algorithm is a path-following algorithm in the spirit of local search, but it is not guided by an objective or potential function and thus does not prove that computing an MNE of a bimatrix game is in PLS. In conjunction with our earlier reasoning, however, the Lemke-Howson algorithm shows that the problem is not NP-hard unless NP = coNP.25

A complexity class that is related to but apparently different from PLS is PPAD, which stands for "polynomial parity argument, directed version". This class was defined in Papadimitriou32 to capture the complexity of computing MNE and related problems, such as computing approximate Brouwer fixed points. Its formal definition parallels that of PLS, with a PPAD problem consisting of the minimal ingredients necessary to execute a Lemke-Howson-like path-following procedure (again easily phrased as three polynomial-time algorithms). A problem in PPAD is PPAD-complete if every problem in PPAD reduces to it in polynomial time; the complete problem is then polynomial-time solvable only if all problems in PPAD are. Since PPAD contains several well-studied problems that are not known to be solvable via a polynomial-time algorithm, a proof of PPAD-completeness can be interpreted as a significant intractability result.

A few years ago, the problem of computing an MNE of a bimatrix game was shown to be PPAD-complete.8,12 Thus, if P ≠ PPAD, there is no general-purpose and computationally efficient algorithm for this problem, and in particular there is no general and tractable way for players to reach a Nash equilibrium in a reasonable amount of time. This hardness result casts doubt on the predictive power of the Nash equilibrium concept in arbitrary games. See Chen et al.8 and Daskalakis et al.12 for the details of this tour de force result and Daskalakis et al.13 for a high-level survey of the proof.

Back to Top

Future Directions

The rapid rate of progress in algorithmic game theory has been nourished by deep connections with other areas of theoretical computer science and a consistent infusion of new motivating applications. There remains a surfeit of important open research directions across all three of the AGT areas surveyed here, such as developing theory for the design and analysis of mechanisms for multi-parameter problems, for minimizing the inefficiency of equilibria (for example, via a mediating network protocol), and for the computation of approximate equilibria. See Roughgarden35 and the concluding sections of many chapters in Nisan30 for more details and many concrete open problems.

A broad challenge, mentioned also in Shoham's recent Communications article,40 is to develop more appropriate models of agent behavior. All of the results described in this article, even the welfare guarantee of the simple second-price auction, depend on some kind of behavioral assumptions about the participants. Such assumptions are required to address modern applications, yet are largely foreign to the theoretical computer science mindset, which is characterized by minimal assumptions and worst-case analysis. But a number of new types of worst-case guarantees, coupled with novel behavioral models, have already begun to sprout in the AGT literature. For example: mechanism implementation in undominated strategies3 and in ex post collusion-proof Nash equilibrium;31 the price of total anarchy;6,36 and the complexity of unit-recall games.16 We expect these are only the vanguard of what promises to be a rich and relevant theory.

References

1. Archer, A. and Tardos, É. Truthful mechanisms for one-parameter agents. FOCS '01, 482–491.

2. Awerbuch, B., Azar, Y., Epstein, A., Mirrokni, V.S., and Skopalik, A. Fast convergence to nearly optimal solutions in potential games. EC '08, 264–273.

3. Babaioff, M., Lavi, R., and Pavlov, E. Single-value combinatorial auctions and algorithmic implementation in undominated strategies. JACM 56, 1 (2009).

4. Beckmann, M.J., McGuire, C.B., and Winsten, C.B. Studies in the Economics of Transportation. Yale University Press, 1956.

5. Bertsekas, D.P. and Tsitsiklis, J.N. Parallel and Distributed Computation: Numerical Methods. Prentice-Hall, 1989.

6. Blum, A., Hajiaghayi, M., Ligett, K., and Roth, A. Regret minimization and the price of total anarchy. STOC '08, 373–382.

7. Braess, D. Über ein Paradoxon aus der Verkehrsplanung. Unternehmensforschung 12 (1968), 258–268.

8. Chen, X., Deng, X., and Teng, S.-H. Settling the complexity of computing two-player Nash equilibria. JACM 56, 3 (2009).

9. Chien, S. and Sinclair, A. Convergence to approximate Nash equilibria in congestion games. SODA '07, 169–178.

10. Cohen, J.E. and Horowitz, P. Paradoxical behavior of mechanical and electrical networks. Nature 352, 8 (1991), 699–701.

11. Cramton, P., Shoham, Y., and Steinberg, R., Eds. Combinatorial Auctions. MIT Press, 2006.

12. Daskalakis, C., Goldberg, P.W., and Papadimitriou, C.H. The complexity of computing a Nash equilibrium. SIAM Journal on Computing 39, 1 (2009), 195–259.

13. Daskalakis, C., Goldberg, P.W., and Papadimitriou, C.H. The complexity of computing a Nash equilibrium. Commun. ACM 52, 2 (Feb. 2009), 89–97.

14. Dhangwatnotai, P., Dobzinski, S., Dughmi, S., and Roughgarden, T. Truthful approximation schemes for single-parameter agents. FOCS '08, 15–24.

15. Etessami, K. and Yannakakis, M. On the complexity of Nash equilibria and other fixed points. SIAM Journal on Computing 39, 6 (2010), 2531–2597.

16. Fabrikant, A. and Papadimitriou, C.H. The complexity of game dynamics: BGP oscillations, sink equilibria, and beyond. SODA '08, 844–853.

17. Fabrikant, A., Papadimitriou, C.H., and Talwar, K. The complexity of pure Nash equilibria. STOC '04, 604–612.

18. Feigenbaum, J., Parkes, D.C., and Pennock, D.M. Computational challenges in e-commerce. Commun. ACM 52, 1 (Jan. 2009), 70–74.

19. Goemans, M.X., Mirrokni, V.S., and Vetta, A. Sink equilibria and convergence. FOCS '05, 142–151.

20. Jackson, M.O. A crash course in implementation theory. Social Choice and Welfare 18, 4 (2001), 655–708.

21. Johnson, D.S., Papadimitriou, C.H., and Yannakakis, M. How easy is local search? J. Computer and System Sciences 37, 1 (1988), 79–100.

22. Koutsoupias, E. and Papadimitriou, C.H. Worst-case equilibria. STACS '99, 404–413.

23. Lavi, R., Mu'alem, A., and Nisan, N. Towards a characterization of truthful combinatorial auctions. FOCS '03, 574–583.

24. Lemke, C.E. and Howson, J.T., Jr. Equilibrium points of bimatrix games. SIAM J. 12, 2 (1964), 413–423.

25. Megiddo, N. and Papadimitriou, C.H. On total functions, existence theorems and computational complexity. Theoretical Computer Science 81, 2 (1991), 317–324.

26. Monderer, D. and Shapley, L.S. Potential games. Games and Economic Behavior 14, 1 (1996), 124–143.

27. Myerson, R. Optimal auction design. Mathematics of Operations Research 6, 1 (1981), 58–73.

28. Nash, J.F., Jr. Equilibrium points in N-person games. Proceedings of the National Academy of Sciences 36, 1 (1950), 48–49.

29. Nisan, N. and Ronen, A. Algorithmic mechanism design. Games and Economic Behavior 35, 1–2 (2001), 166–196.

30. Nisan, N., Roughgarden, T., Tardos, É., and Vazirani, V.V., Eds. Algorithmic Game Theory. Cambridge University Press, 2007.

31. Nisan, N., Schapira, M., Valiant, G., and Zohar, A. Best-reply mechanisms. Working paper, 2009.

32. Papadimitriou, C.H. On the complexity of the parity argument and other inefficient proofs of existence. J. Computer and System Sciences 48, 3 (1994), 498–532.

33. Papadimitriou, C.H., Schapira, M., and Singer, Y. On the hardness of being truthful. FOCS '08, 250–259.

34. Rosenthal, R.W. A class of games possessing pure-strategy Nash equilibria. International J. Game Theory 2, 1 (1973), 65–67.

35. Roughgarden, T. Algorithmic game theory: Some greatest hits and future directions. TCS '08, 21–42.

36. Roughgarden, T. Intrinsic robustness of the price of anarchy. STOC '09, 513–522.

37. Roughgarden, T. The price of anarchy is independent of the network topology. J. Computer and System Sciences 67, 2 (2003), 341–364.

38. Roughgarden, T. Computing equilibria: A computational complexity perspective. Economic Theory 42, 1 (Jan. 2010), 193–236.

39. Roughgarden, T. and Tardos, É. How bad is selfish routing? JACM 49, 2 (2002), 236–259.

40. Shoham, Y. Computer science and game theory. Commun. ACM 51, 8 (Aug. 2008), 75–79.

41. Shoham, Y. and Leyton-Brown, K. Multiagent Systems: Algorithmic, Game Theoretic and Logical Foundations. Cambridge University Press, 2008.

42. Steiglitz, K. Snipers, Shills, and Sharks: eBay and Human Behavior. Princeton University Press, 2007.

43. Vickrey, W. Counterspeculation, auctions, and competitive sealed tenders. J. Finance 16, 1 (1961), 8–37.

Author

Tim Roughgarden is an assistant professor in the computer science and management science and engineering departments at Stanford University, Stanford, CA. He is the recipient of ACM's 2009 Grace Murray Hopper Award.

Footnotes

a. Modern search engines use allocation algorithms that are similar to rank-by-weighted bid algorithms. By historical accident, they use a slightly different pricing rule than that advocated by Myerson's Lemma, although the two pricing rules lead to comparable outcomes and revenue at equilibrium. For details, see Lahaie et al.30

b. This network is called a "paradox" because removing the intuitively helpful zero-cost edge (depriving users of one of their options) recovers the optimal solution as an equilibrium, thereby decreasing the cost incurred by all users. Analogously, cutting a taut string in a network of strings and springs that carries a heavy weight can cause the weight to levitate further off the ground!10
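The arithmetic behind this footnote is short enough to check directly. The snippet below assumes the classic Braess cost functions (edges s->v and w->t cost their load x, edges s->w and v->t cost a constant 1, and the shortcut v->w costs 0); these specific functions are an assumption here, since the figure itself is not reproduced in the text.

```python
# Braess's paradox, checked numerically. One unit of traffic from s to t.
# Edges s->v and w->t cost x (their load); s->w and v->t cost 1;
# the optional shortcut v->w costs 0. (Classic cost functions, assumed.)

def path_costs(f_top, f_bottom, f_zig):
    """Costs of paths s->v->t, s->w->t, s->v->w->t given their flows."""
    x_sv = f_top + f_zig       # load on s->v
    x_wt = f_bottom + f_zig    # load on w->t
    return x_sv + 1, 1 + x_wt, x_sv + 0 + x_wt

# Without the shortcut, splitting evenly is an equilibrium:
# both used paths cost 3/2 (the zig-zag path does not exist yet).
print(path_costs(0.5, 0.5, 0.0))   # (1.5, 1.5, 1.0)
# With the shortcut, all traffic on the zig-zag path is the equilibrium:
# it costs 2, and deviating to either two-hop path also costs 2.
print(path_costs(0.0, 0.0, 1.0))   # (2.0, 2.0, 2.0)
```

Adding the zero-cost edge raises everyone's equilibrium cost from 3/2 to 2; removing it recovers the 3/2 outcome, just as the footnote describes.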

c. The complexity classes P and NP are usually defined for decision problems, where the answer sought is a simple "yes" or "no." Here we refer to the similar but more general search versions of P and NP, where for a "yes" instance, the deliverables include a correct solution.

This work is supported in part by NSF CAREER Award CCF-0448664, an ONR Young Investigator Award, an AFOSR MURI grant, and an Alfred P. Sloan Fellowship.


Figures

F1Figure 1. Two selfish routing networks with price of anarchy 4/3. One unit of selfish traffic travels from s to t. At equilibrium, all traffic travels on the bottom path and the zig-zag path, respectively. In an optimal solution, traffic is split equally between the two edges and between the two two-hop paths, respectively.
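The 4/3 ratio claimed for the first network can be verified numerically; the snippet below assumes the standard Pigou cost functions c(x) = x and c(x) = 1 on the two edges (an assumption, since the figure itself is not reproduced here).

```python
# One unit of selfish traffic over two parallel edges with per-unit
# costs c1(x) = x and c2(x) = 1 (the standard Pigou example).

def total_cost(x):
    """Total travel cost when a fraction x uses the c(x)=x edge."""
    return x * x + (1 - x) * 1

eq_cost = total_cost(1.0)  # at equilibrium everyone takes the c(x)=x edge
opt_cost = min(total_cost(i / 1000) for i in range(1001))
print(eq_cost, opt_cost, eq_cost / opt_cost)   # 1.0 0.75 1.333...
```

Everyone routing on the x edge is an equilibrium, since its cost never exceeds the alternative's cost of 1; yet splitting evenly costs only 3/4, giving the ratio 4/3.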

F2Figure 2. Modest overprovisioning guarantees near-optimal routing. (a) displays the per-unit cost c(x) = 1/(u - x) as a function of the load x for an edge with capacity u = 2. (b) shows the worst-case price of anarchy as a function of the fraction of unused network capacity.
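The shape of the curve in (b) can be reproduced on a two-link Pigou-style instance built from the cost function in (a): one congestible edge with cost 1/(u - x) and a constant-cost alternative priced at the congested edge's full-load cost. This two-link instance is an illustrative assumption (Pigou-style networks are worst cases for such bounds37), not necessarily the exact family behind the figure.

```python
# Price of anarchy for M/M/1-type costs c(x) = 1/(u - x) on a
# two-link Pigou-style network carrying r units of traffic.

def price_of_anarchy(u, r=1.0, steps=100000):
    eq = r / (u - r)       # all traffic on the congestible edge
    const = 1.0 / (u - r)  # alternative route's fixed per-unit cost
    opt = min(x / (u - x) + (r - x) * const
              for x in (i * r / steps for i in range(steps + 1)))
    return eq / opt

print(round(price_of_anarchy(2.0), 3))   # 1.207
```

With u = 2, half the capacity is unused at equilibrium and the price of anarchy is only about 1.21; shrinking the spare fraction drives the ratio up, matching the figure's message that modest overprovisioning keeps routing near-optimal.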


©2010 ACM  0001-0782/10/0700  $10.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2010 ACM, Inc.