Course: Game Theory
Course Level: Senior
This course is an introduction to game theory and strategic thinking. Ideas such as dominance, backward induction, Nash equilibrium, evolutionary stability, commitment, credibility, asymmetric information, adverse selection, and signaling are discussed and applied to games played in class and to examples drawn from economics, politics, the movies, and elsewhere.
The lectures in this course are broken down into "Concepts."
Prerequisites
  Microeconomics
  Probability
  Statistics
  Intermediate Microeconomics
  Calculus

Lecture 1  Introduction to Game Theory
We introduce Game Theory by playing a game. We organize the game into players, their strategies, and their goals or payoffs; and we learn that we should decide what our goals are before we make choices. With some plausible payoffs, our game is a prisoners' dilemma. We learn that we should never choose a dominated strategy; but that rational play by rational players can lead to bad outcomes. We discuss some prisoners' dilemmas in the real world and some possible real-world remedies. With other plausible payoffs, our game is a coordination problem and has very different outcomes: so different payoffs matter. We often need to think, not only about our own payoffs, but also others' payoffs. We should put ourselves in others' shoes and try to predict what they will do. This is the essence of strategic thinking.
Concepts
  Concept 1  What is Strategy? Where does it apply?
  Concept 2  Elements of a Game: Strategies, Actions, Outcomes and Payoffs
  Concept 3  Strictly Dominant versus Strictly Dominated Strategies
  Concept 4  Contracts and Collusion
  Concept 5  The Failure of Collusion and Inefficient Outcomes: Prisoner's Dilemma
  Concept 6  Coordination Problems
  Concept 7  Lesson Recap
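The dominated-strategy logic from this lecture can be checked mechanically. A minimal sketch in Python, with illustrative prisoners' dilemma payoffs (the numbers are not taken from the course):

```python
# Two-player game: PAYOFFS[(row, col)] = (row player's payoff, column player's payoff).
# Illustrative prisoners' dilemma numbers.
PAYOFFS = {
    ("Cooperate", "Cooperate"): (2, 2),
    ("Cooperate", "Defect"):    (0, 3),
    ("Defect",    "Cooperate"): (3, 0),
    ("Defect",    "Defect"):    (1, 1),
}
STRATEGIES = ["Cooperate", "Defect"]

def strictly_dominates(a, b):
    """True if row strategy a pays strictly more than b against every column strategy."""
    return all(PAYOFFS[(a, s)][0] > PAYOFFS[(b, s)][0] for s in STRATEGIES)

print(strictly_dominates("Defect", "Cooperate"))  # True: never play a dominated strategy
```

Running the check confirms the lecture's point: Defect strictly dominates Cooperate, yet mutual defection (1, 1) is worse for both players than mutual cooperation (2, 2).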

Lecture 3  Iterative Deletion and the Median-Voter Theorem

Lecture 4  Best Responses in Soccer and Business Partnerships

Lecture 6  Nash Equilibrium; Dating and Cournot

Lecture 8  Nash Equilibrium; Location, Segregation and Randomization
We first complete our discussion of the candidate-voter model showing, in particular, that, in equilibrium, two candidates cannot be too far apart. Then we play and analyze Schelling's location game. We discuss how segregation can occur in society even if no one desires it. We also learn that seemingly irrelevant details of a model can matter. We consider randomizations first by a central authority (such as in a bussing policy), and then decentralized randomization by the individuals themselves, "mixed strategies." Finally, we look at rock, paper, scissors to see an example of a mixed-strategy equilibrium to a game.
Concepts
  Concept 1  Candidate-Voter Model
  Concept 2  Location and Segregation: Why Outcomes Are Not Necessarily Preferences
  Concept 3  Location and Segregation: Examples
  Concept 4  Location and Segregation: Policy Implications
  Concept 5  Location and Segregation: Central vs Individual Randomization
  Concept 6  Pure vs Mixed Strategies: Rock, Paper, Scissors
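The rock, paper, scissors example admits a quick numerical check: at the equal-thirds mix, every pure strategy earns the same expected payoff, which is exactly what makes the mix an equilibrium. A minimal sketch (payoffs: win = +1, lose = -1, tie = 0):

```python
# Which move each move beats, in rock-paper-scissors.
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
MOVES = list(BEATS)

def payoff(a, b):
    """Row player's payoff: +1 for a win, -1 for a loss, 0 for a tie."""
    if a == b:
        return 0
    return 1 if BEATS[a] == b else -1

mix = {m: 1 / 3 for m in MOVES}  # the equilibrium mixed strategy
expected = {a: sum(mix[b] * payoff(a, b) for b in MOVES) for a in MOVES}
print(expected)  # every pure strategy earns expected payoff 0 against the mix
```

Because each pure strategy does equally well against the mix, no player has a profitable deviation; any unequal mix would instead be exploitable.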

Lecture 10  Mixed Strategies in Baseball, Dating and Paying Your Taxes
We develop three different interpretations of mixed strategies in various contexts: sport, anti-terrorism strategy, dating, paying taxes and auditing taxpayers. One interpretation is that people literally randomize over their choices. Another is that your mixed strategy represents my belief about what you might do. A third is that the mixed strategy represents the proportions of people playing each pure strategy. Then we discuss some implications of the mixed equilibrium in games; in particular, we look at how the equilibrium changes in the tax-compliance/auditor game as we increase the penalty for cheating on your taxes.
Concepts
  Concept 1  Mixed Strategy Equilibria: Example
  Concept 2  Mixed Strategy Equilibria: Other Examples in Sports
  Concept 3  Mixed Strategy Equilibria Interpretation 1: Literal Randomization
  Concept 4  Mixed Strategy Equilibria Interpretation 2: Players' Beliefs about Each Other's Actions
  Concept 5  Mixed Strategy Equilibria Interpretation 3: Prediction of Split on Two or More Courses of Action in a Large Population
  Concept 6  Mixed Strategy Equilibria: Policy Applications
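The penalty comparative static can be illustrated with the taxpayer's indifference condition. The payoff structure below is a simplified sketch, not the course's exact game: the taxpayer saves an amount if a cheat goes unaudited and pays a fine if audited, with honesty normalized to zero.

```python
def equilibrium_audit_prob(tax_saved, fine):
    """Audit probability q that makes a risk-neutral taxpayer indifferent
    between cheating and honesty:
        q * (-fine) + (1 - q) * tax_saved = 0
        =>  q = tax_saved / (tax_saved + fine)
    """
    return tax_saved / (tax_saved + fine)

# Raising the fine lowers the audit probability needed in equilibrium:
for fine in (10, 50, 100):
    print(fine, equilibrium_audit_prob(tax_saved=10, fine=fine))
```

The sketch shows the mixed-equilibrium logic: a bigger penalty does not change the taxpayer's payoff from honesty, but it makes cheating riskier, so a smaller audit probability suffices to keep the taxpayer indifferent.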

Lecture 11  Evolutionary Stability; Cooperation, Mutation, and Equilibrium
We discuss evolution and game theory, and introduce the concept of evolutionary stability. We ask what kinds of strategies are evolutionarily stable, and how this idea from biology relates to concepts from economics like domination and Nash equilibrium.
Concepts
  Concept 1  Game Theory and Evolution: Evolutionarily Stable Strategies  Example
  Concept 2  Game Theory and Evolution: Evolutionarily Stable Strategies  Discussion
  Concept 3  Game Theory and Evolution: Evolutionarily Stable Strategies Are Always Nash Equilibria
  Concept 4  Game Theory and Evolution: Nash Equilibria Are Not Always Evolutionarily Stable
  Concept 5  Game Theory and Evolution: Evolutionarily Stable Strategies and Nash Equilibria  Discussion
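The standard (Maynard Smith) conditions for evolutionary stability can be checked mechanically. The 2x2 symmetric payoffs below are a hypothetical example chosen to exhibit the lecture's one-way relationship: a Nash strategy that is not evolutionarily stable.

```python
def is_ess(s, strategies, u):
    """Maynard Smith's conditions: s is evolutionarily stable if, for every
    mutant t != s, either u(s, s) > u(t, s), or
    u(s, s) == u(t, s) and u(s, t) > u(t, t)."""
    for t in strategies:
        if t == s:
            continue
        if u(s, s) > u(t, s):
            continue
        if u(s, s) == u(t, s) and u(s, t) > u(t, t):
            continue
        return False
    return True

# Hypothetical symmetric payoffs: (A, A) is a Nash equilibrium (no strict gain
# from deviating), but B-mutants do strictly better against each other.
payoff = {("A", "A"): 1, ("A", "B"): 1, ("B", "A"): 1, ("B", "B"): 2}
u = lambda a, b: payoff[(a, b)]
print(is_ess("A", ["A", "B"], u), is_ess("B", ["A", "B"], u))  # False True
```

Here A survives the Nash test but fails the tie-breaking condition, since a B-mutant ties against A yet beats other B-mutants, so B-mutants can invade; B passes both tests.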

Lecture 12  Evolutionary Stability; Social Convention, Aggression, and Cycles
We apply the idea of evolutionary stability to consider the evolution of social conventions. Then we consider games that involve aggressive (Hawk) and passive (Dove) strategies, finding that sometimes evolutionary populations are mixed. We discuss how such games can help us to predict how behavior might vary across settings. Finally, we consider a game in which there is no evolutionarily stable population and discuss an example from nature.
Concepts
  Concept 1  Monomorphic and Polymorphic Populations: Definition
  Concept 2  Monomorphic and Polymorphic Populations: Hawk vs Dove
  Concept 3  Monomorphic and Polymorphic Populations: Discussion
  Concept 4  Monomorphic and Polymorphic Populations: Identification and Testability
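In the standard Hawk-Dove game, when the cost of a fight c exceeds the prize v, the stable population is polymorphic: a fraction v/c of hawks equalizes the two types' payoffs. A numerical check under illustrative values:

```python
def payoffs_at(p, v, c):
    """Expected payoffs to a hawk and a dove in a population with hawk share p.
    Stage payoffs: H vs H = (v - c)/2, H vs D = v, D vs H = 0, D vs D = v/2."""
    hawk = p * (v - c) / 2 + (1 - p) * v
    dove = (1 - p) * v / 2
    return hawk, dove

v, c = 2, 6                      # prize 2, fight cost 6 (illustrative numbers)
p_star = v / c                   # candidate stable mix: one-third hawks
hawk, dove = payoffs_at(p_star, v, c)
print(abs(hawk - dove) < 1e-9)   # True: both types do equally well at share v/c
```

Away from v/c the mix self-corrects: an all-hawk population pays (v - c)/2 < 0 per encounter, so doves invade, while in an all-dove population a hawk wins v every time, so hawks invade.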

Lecture 14  Backward Induction; Commitment, Spies, and First-Mover Advantages

Lecture 15  Backward Induction; Chess, Strategies, and Credible Threats
We first discuss Zermelo's theorem: that games like tic-tac-toe or chess have a solution. That is, either there is a way for player 1 to force a win, or there is a way for player 1 to force a tie, or there is a way for player 2 to force a win. The proof is by induction. Then we formally define and informally discuss both perfect information and strategies in such games. This allows us to find Nash equilibria in sequential games. But we find that some Nash equilibria are inconsistent with backward induction. In particular, we discuss an example that involves a threat that is believed in an equilibrium but does not seem credible.
Concepts
  Concept 1  First and Second Mover Advantages: Zermelo's Theorem
  Concept 2  Zermelo's Theorem: Proof
  Concept 3  Zermelo's Theorem: Generalization
  Concept 4  Zermelo's Theorem: Games of Induction
  Concept 5  Games of Perfect Information: Definition
  Concept 6  Games of Perfect Information: Economic Example
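Backward induction on a perfect-information game tree is a one-function recursion: solve every subgame from the leaves up, with each mover picking the child that maximizes their own payoff. The tiny tree below (hypothetical payoffs) includes a threat that the solution reveals to be non-credible.

```python
def solve(node):
    """Backward induction: return the equilibrium payoff pair of this subgame.
    A node is either a leaf payoff pair (p1, p2), or (mover, [children])."""
    player, rest = node
    if isinstance(rest, list):  # internal node: the mover picks the best child
        return max((solve(c) for c in rest), key=lambda pay: pay[player])
    return node                 # leaf: payoff pair

# Player 0 moves first; player 1 replies. The right branch contains a threat
# payoff (-1, -1) that is never carried out under backward induction.
tree = (0, [
    (1, [(1, 2), (0, 0)]),     # left:  player 1 would pick (1, 2)
    (1, [(3, 1), (-1, -1)]),   # right: player 1 would pick (3, 1), not the threat
])
print(solve(tree))  # (3, 1): the first mover goes right, ignoring the threat
```

This is the lecture's credibility point in miniature: "I will play (-1, -1) if you come right" could support a Nash equilibrium where player 0 goes left, but the threat fails backward induction, since carrying it out would hurt the threatener.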

Lecture 21  Repeated Games; Cooperation vs the End Game
We discuss repeated games, aiming to unpack the intuition that the promise of rewards and the threat of punishment in the future of a relationship can provide incentives for good behavior today. In class, we play the prisoners' dilemma twice and three times, but this fails to sustain cooperation. The problem is that, in the last stage, since there is then no future, there is no incentive to cooperate, and hence the incentives unravel from the back. We relate this to the real-world problems of a lame-duck leader and of maintaining incentives for those close to retirement. But it is possible to sustain good behavior in early stages of some repeated games (even if they are only played a few times) provided the stage games have two or more equilibria to be used as rewards and punishments. This may require us to play bad equilibria tomorrow. We relate this to the trade-off between ex ante and ex post efficiency in the law. Finally, we play a game in which the players do not know when the game will end, and we start to consider strategies for this potentially infinitely repeated game.
Concepts
  Concept 1  Repeated Interaction: Cooperation vs Defection in the Prisoner's Dilemma
  Concept 2  Repeated Interaction: The Breakdown of Cooperation and the Lame Duck Effect
  Concept 3  Repeated Interaction: Renegotiation
  Concept 4  Failure of Renegotiation: Bankruptcy Laws
  Concept 5  Repeated Interaction: The Grim Trigger Strategy
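The unraveling argument can be checked numerically in a twice-played prisoners' dilemma. The stage payoffs below are illustrative: once the opponent's last-period defection is taken as given (with no future, defecting is the unique stage best response), defecting early strictly pays.

```python
# Stage game: u[(mine, yours)] is my payoff; illustrative prisoners' dilemma numbers.
u = {("C", "C"): 2, ("C", "D"): 0, ("D", "C"): 3, ("D", "D"): 1}

def total(my_plays, your_plays):
    """Total payoff over the repeated game, period by period."""
    return sum(u[(m, y)] for m, y in zip(my_plays, your_plays))

# Suppose my opponent cooperates in period 1 and (inevitably) defects in period 2.
# Then defecting in period 1 beats cooperating in period 1:
print(total("DD", "CD"), ">", total("CD", "CD"))  # 4 > 3
```

Since last-period defection is certain, cooperating in the first period buys no future reward, so the same comparison applies there too, and the logic repeats back to period 1 for any finite horizon.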

Lecture 22  Repeated Games; Cheating, Punishment, and Outsourcing
In business or personal relationships, promises and threats of good and bad behavior tomorrow may provide good incentives for good behavior today, but, to work, these promises and threats must be credible. In particular, they must come from equilibrium behavior tomorrow, and hence form part of a subgame-perfect equilibrium today. We find that the grim strategy forms such an equilibrium provided that we are patient and the game has a high probability of continuing. We discuss what this means for the personal relationships of seniors in the class. Then we discuss less draconian punishments, and find there is a trade-off between the severity of punishments and the required probability that relationships will endure. We apply this idea to a moral-hazard problem that arises with outsourcing, and find that the high wage premiums found in foreign sectors of emerging markets may be reduced as these relationships become more stable.
Concepts
  Concept 1  Repeated Interaction: The Grim Trigger Strategy in the Prisoner's Dilemma
  Concept 2  The Grim Trigger Strategy: Generalization and Real World Examples
  Concept 3  Cooperation in Repeated Interactions: The "One Period Punishment" Strategy
  Concept 4  Cooperation in Repeated Interactions: Repeated Moral Hazard
  Concept 5  Cooperation in Repeated Interactions: Conclusions
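The patience condition for the grim trigger strategy reduces to a single inequality. Using the standard prisoners' dilemma labels (R reward, T temptation, P punishment) with illustrative numbers, and reading the discount factor delta as combining patience with the probability the game continues:

```python
def grim_trigger_sustains(R, T, P, delta):
    """Cooperation via grim trigger is an equilibrium when cooperating forever
    beats a one-shot deviation followed by permanent punishment:
        R / (1 - delta) >= T + delta * P / (1 - delta)
    which rearranges to:
        delta >= (T - R) / (T - P)
    (R = reward for cooperation, T = temptation, P = punishment payoff).
    """
    return delta >= (T - R) / (T - P)

# Illustrative payoffs R=2, T=3, P=1 give a threshold of delta = 1/2:
print(grim_trigger_sustains(2, 3, 1, 0.6), grim_trigger_sustains(2, 3, 1, 0.4))
```

This is the lecture's trade-off in miniature: a bigger gap between the cooperative payoff and the punishment payoff lowers the threshold, so harsher credible punishments let less durable relationships sustain cooperation.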