Diarmuid O'Conghaile
Senior Sophister

What interests economists is to model the strategies on which equilibrium is based, and thereby to understand and explain human behaviour. Traditional microeconomic theory regards the individual as a rational, utility-maximising price-taker, which is to say that the individual's actions have no effect on the aggregate economy. Game theory, on the other hand, looks at situations in which the aggregate outcome depends on the individuals' actions. The individuals in the model know this, and form and pursue rational, utility-maximising strategies on the basis of that knowledge. This is what we call strategic interaction.

In the real world, we can observe an economy moving, over time, from one state to another, from one equilibrium to another. This movement takes place continuously, rather than in discrete jumps, but we do nonetheless speak of the economy as moving from state a to state b. If we then employ the concept of strategic interaction, we can say that the economy arrives at state b as a result of the interacting behaviour of the individuals involved, and that each individual's behaviour is based on his rational strategy for maximising utility. The unique equilibrium, state b, can therefore be thought of as occurring where the individuals' strategies intersect. Game theory seeks to model these strategies on which an equilibrium is based. We construct small economies (self-contained worlds in which all other things are equal because there are no other things) and we watch what the players do, what choices they make and what equilibria they arrive at. This may strike the layman as a rather arcane activity, but the results have practical usefulness, in so far as game theory helps us to explain the strategies which underlie real-world outcomes, empirically observed.

A solution concept is a model of strategic interaction. In the real world, strategic interaction always produces a unique equilibrium outcome, a solution. In the world of games, the ideal solution concept is the one which similarly always produces a solution. No such concept exists at present. There are, as we shall see, many games in which we cannot produce any equilibrium outcome. Such situations represent modelling breakdowns. However, as already suggested, the strategies on which an outcome is based are of greater theoretical importance than the outcome itself. And it is always possible to demonstrate (for games with a finite number of players, each having a finite number of possible actions to choose from) the existence of a (Nash) equilibrium in strategies, be they mixed strategies or pure strategies. And by running these strategies off we will always produce an outcome. The fact that we cannot always say which outcome, in a given situation, is of secondary importance. We need, finally, to make one refinement of our discussion so far before we proceed. When we speak here about producing an equilibrium outcome, we mean logically deducing the outcome with recourse only to the players' strategies, as defined, and the rules and parameters of the game. The actual mechanics of how the equilibrium is arrived at are not touched upon, and may even be regarded as something of a mystery.

Dominant Strategy Equilibrium Concepts

The dominant strategy equilibrium concept is the most basic solution concept: in some sense, the original or natural solution concept, and therefore a good starting point. I will illustrate the workings of the concept with reference to the following games.[23]

The Prisoner's Dilemma

                       Player 1
                   Deny      Confess
Player 2  Deny     (-1,-1)   (-10,0)
          Confess  (0,-10)   (-8,-8)

(Confess, Confess) is the unique equilibrium solution in the one-shot Prisoner's Dilemma. We speak here of an equilibrium in strictly dominant strategies. For either row or column, confess is the unique optimal strategy (i.e. the strategy which maximises game utility) regardless of what the other player will do.
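The dominance check can be sketched in a few lines of Python (the payoffs are those of the table above; the function name and data layout are my own):

```python
# Prisoner's Dilemma payoffs: payoffs[(row, col)] = (row player, column player)
payoffs = {
    ("deny", "deny"): (-1, -1), ("deny", "confess"): (-10, 0),
    ("confess", "deny"): (0, -10), ("confess", "confess"): (-8, -8),
}
actions = ["deny", "confess"]

def strictly_dominant_for_row(candidate):
    """True if `candidate` strictly beats every other row action
    against every possible column action."""
    return all(
        payoffs[(candidate, c)][0] > payoffs[(other, c)][0]
        for other in actions if other != candidate
        for c in actions
    )

print([a for a in actions if strictly_dominant_for_row(a)])  # -> ['confess']
```

By symmetry, the same check for the column player (using the second payoff coordinate) also singles out confess.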

Game 1

                Left  Middle   Right  
Player     Top  (4,3)  (2,7)    (0,4)  
        Bottom  (5,5)  (5,-1)  (-4,-2)

Game 2

                Left  Middle  Right  
Player     Top  (0,2)  (1,0)   (0,1)  
        Bottom  (0,3)  (0,1)   (1,0)  

Games 1 and 2 illustrate refinements of the dominant strategy equilibrium concept. In Game 1, there is no equilibrium in strictly dominant strategies. But by the process of iterated deletion of strictly dominated strategies we can arrive at (bottom, left) as a unique equilibrium. A strictly dominant strategy is a player's strictly best response to every strategy the other players might choose, not merely to their optimal strategies. The iterative process relies on the perfect rationality of all players, and each successive iteration adds another layer of assumed rationality: I'm watching you watching me watching you, and so on. In Game 2, we see the idea of weak dominance. In general, a weakly dominant strategy is player 1's best response (to any strategy played by player 2) in the sense that it is at least as good as any alternative strategy player 1 might employ, and better than at least one alternative strategy.
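The iterated deletion procedure for Game 1 can be sketched as follows (payoffs from the Game 1 table; the function is my own minimal implementation):

```python
# Game 1 payoffs: payoffs[(row, col)] = (row player, column player)
payoffs = {
    ("top", "left"): (4, 3), ("top", "middle"): (2, 7), ("top", "right"): (0, 4),
    ("bottom", "left"): (5, 5), ("bottom", "middle"): (5, -1), ("bottom", "right"): (-4, -2),
}

def iterated_deletion(rows, cols):
    """Repeatedly delete strictly dominated pure strategies for each player
    until no further deletion is possible."""
    changed = True
    while changed:
        changed = False
        for r in rows[:]:  # r is dominated if some other row is strictly better vs. all cols
            if any(all(payoffs[(o, c)][0] > payoffs[(r, c)][0] for c in cols)
                   for o in rows if o != r):
                rows.remove(r); changed = True
        for c in cols[:]:  # likewise for columns, using the column player's payoff
            if any(all(payoffs[(r, o)][1] > payoffs[(r, c)][1] for r in rows)
                   for o in cols if o != c):
                cols.remove(c); changed = True
    return rows, cols

print(iterated_deletion(["top", "bottom"], ["left", "middle", "right"]))
# -> (['bottom'], ['left'])
```

The first pass deletes right (dominated by middle for the column player); only then is top dominated by bottom, and finally middle by left, which is exactly the layering of assumed rationality described above.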

The dominant strategy approach offers us four solution concepts: strict dominance, iterated strict dominance, weak dominance, and iterated weak dominance. The concept of weak dominance is not entirely satisfactory because it relies upon indifference. Strict dominance, on the other hand, is a very satisfactory and very strong equilibrium concept. We can regard it, in some sense, as the natural equilibrium concept.

Introducing the Nash Equilibrium Concept

Generally, however, strict dominance proves too strong. That is, the rigorous conditions which define the strictly dominant strategy equilibrium do not occur often, as is the case in the following games.

Boxed Pigs


                   Small
                   Press    Wait
Large     Press    (5,1)    (4,4)
          Wait     (9,-1)   (0,0)

In Boxed Pigs, there is no equilibrium in dominant strategies, since the large pig has no dominant strategy. We require a new concept, the Nash equilibrium. A Nash equilibrium involves a set of beliefs (each player's beliefs about the other's utility-maximising strategy), confirmed in equilibrium, where each player's strategy is the best response to the other's. Put precisely, in a Nash equilibrium each player's strategy is optimal given the other's strategy. (We must always remember, however, that the other person's strategy is given only in the sense that it is expected.) In Boxed Pigs, the Nash equilibrium is (press, wait). Each pig's strategy is the best response to the other's. We immediately see that the concept of Nash equilibrium is similar to the concept of iterated dominance, and, in fact, by theorem, every dominant strategy equilibrium is also a Nash equilibrium. But the converse is not true. Nash equilibrium is a much more powerful concept.[24] It produces equilibria where the dominant strategy concept cannot.
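The Nash condition, that no player gains by a unilateral deviation, can be checked by brute force over all pure strategy profiles (payoffs from the Boxed Pigs table; names are mine):

```python
# Boxed Pigs payoffs: (large pig, small pig); the large pig chooses the row.
payoffs = {
    ("press", "press"): (5, 1), ("press", "wait"): (4, 4),
    ("wait", "press"): (9, -1), ("wait", "wait"): (0, 0),
}
actions = ["press", "wait"]

def is_nash(large, small):
    """A profile is Nash if each pig's action is a best response to the other's."""
    best_large = all(payoffs[(large, small)][0] >= payoffs[(d, small)][0] for d in actions)
    best_small = all(payoffs[(large, small)][1] >= payoffs[(large, d)][1] for d in actions)
    return best_large and best_small

print([(l, s) for l in actions for s in actions if is_nash(l, s)])
# -> [('press', 'wait')]
```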

The basic thesis of this essay is that the Nash equilibrium concept, while not being ideal, is nevertheless an intuitively plausible and universally applicable concept. I will now move on to discuss some of the problems with the Nash equilibrium concept, and will look also at some of the ways in which these problems can be tackled.

Problems and Solutions in Static Games
Battle of the Sexes

                   Female
                   Opera    Fight
Male      Opera    (4,2)    (1,1)
          Fight    (0,0)    (2,4)

The problem which arises in the Battle of the Sexes is that we have two equilibria and no way of deciding between them. (This is a specific instance of the general problem of multiple equilibria.) Battle of the Sexes is a co-ordination game. To maximise utility, each player must co-ordinate his strategy with the other player's strategy. Communication, by definition, is not allowed. Various solutions to this problem are suggested in the literature. If the game is continuously repeated, for example, it may prove possible for the players to evolve some form of mute communication through how they employ their strategies. It is also suggested that significant payoff dominance for one or both of the players may act as a focal point, making that particular equilibrium more attractive than the alternative(s). Another idea is that of correlated strategies. These involve exogenous focal points, which are certainly important in reality, but they have one or two drawbacks from a theoretical point of view unless they can be incorporated into the model. One possible method of doing this would be to give the first (or a pre-game) move to Nature and to allow all players to observe the move. But this alters the model rather than solving the problem, a distinction which may be important.

Welfare Game

                          Pauper
                          Try      Be idle
Government    Aid         (3,2)    (-1,3)
              No Aid      (-1,1)   (0,0)

In problems of multiple equilibria where none of the equilibria is compelling, and/or where re-specification is either impossible or undesirable, the mixed strategies approach can be employed. We now discuss this method with reference to a different problem: namely, when there is no equilibrium in pure strategies. This is the case in the Welfare Game, which, in a sense, is the inverse of the co-ordination game. The players are trying to out-guess each other. We solve the problem by allowing the players to randomise their actions. Since a Nash equilibrium is an equilibrium in beliefs, the beliefs in this case will be probabilities. The government will have a belief about the probability of the welfare-maximising pauper deciding to idle. And the pauper will have a belief about the probability of the welfare-maximising government granting welfare. These beliefs are confirmed in equilibrium. Since the welfare-maximising pauper's strategy (which is equal in equilibrium to the government's beliefs about the pauper's strategy) is his best response to the welfare-maximising behaviour of the government, we derive the pauper's equilibrium strategy by maximising expected government utility using the von Neumann-Morgenstern method. The government's equilibrium strategy is calculated in the corresponding way. And we arrive at a Nash equilibrium in mixed strategies. This is a remarkably powerful methodology. It guarantees us an equilibrium in strategies in every case (subject to the conditions of the Nash theorem), and is, in that sense, an ideal solution concept.
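The indifference calculation for the Welfare Game can be sketched as follows. The symbols theta (the probability that the government aids) and gamma (the probability that the pauper tries to work) are notation of my own; each is pinned down by making the *other* player indifferent between his two pure strategies:

```python
from fractions import Fraction

# Welfare Game payoffs: (government, pauper); the government chooses the row.
U = {
    ("aid", "try"): (3, 2), ("aid", "idle"): (-1, 3),
    ("no aid", "try"): (-1, 1), ("no aid", "idle"): (0, 0),
}

def indifference_prob(x1, x0, y1, y0):
    """Solve x1*p + x0*(1-p) = y1*p + y0*(1-p) for p,
    i.e. the mixing probability that equalises two expected payoffs."""
    return Fraction(y0 - x0, (x1 - x0) - (y1 - y0))

# theta: Pr(government plays aid), chosen so the pauper is indifferent
# between trying (payoffs 2 and 1) and idling (payoffs 3 and 0).
theta = indifference_prob(U[("aid", "try")][1], U[("no aid", "try")][1],
                          U[("aid", "idle")][1], U[("no aid", "idle")][1])

# gamma: Pr(pauper plays try), chosen so the government is indifferent
# between aid (payoffs 3 and -1) and no aid (payoffs -1 and 0).
gamma = indifference_prob(U[("aid", "try")][0], U[("aid", "idle")][0],
                          U[("no aid", "try")][0], U[("no aid", "idle")][0])

print(theta, gamma)  # -> 1/2 1/5
```

So in equilibrium the government aids half the time and the pauper seeks work one time in five, idling the other four.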

There are, however, certain difficulties with the mixed strategies approach. To begin with, we emphasised in the introduction how important to economics is the assumption that human behaviour is non-random. In the mixed strategies approach, we have allowed players to randomise their actions. The behaviour, however, is not truly random. It simply appears random. It is sometimes suggested in the literature that we interpret the equilibrium probabilities in the following way. In aggregate, the pauper will choose to be idle four times out of five. He may consciously randomise his decision the first time around (by tossing a coin, for example), but thereafter his strategy is no longer random. It depends on all previous moves, of which he is assumed to have perfect recall. This can create a problem, though. If the pauper looks for work in the first of a series of five games, then the government knows that he is going to idle for the next four, and will deviate from its Nash equilibrium strategy and play no aid in each of the subsequent moves. Another way of looking at the randomisation is to have the government facing a population of paupers, and believing that 20% will choose the pure strategy of seeking work, and that the other 80% will choose the pure strategy of idling. Alternatively, one individual may have been drawn randomly from a population with the above characteristics. The government, in this case, does not know whether the one pauper it is facing is an idler or a worker. A more serious technical problem than the idea of randomness is the fact that a player who chooses a mixed strategy is indifferent between the pure strategies on offer. Should this cause the player to deviate, even slightly, from the equilibrium mixed strategy, then the equilibrium collapses. Mixed strategy equilibria are thus not as robust as would be desired.

On the subject of robustness, and of slight deviations from equilibrium strategies, we ought to mention the trembling-hand concept. Rasmusen (1989) defines trembling-hand perfection as an equilibrium concept which says that for a strategy to be part of an equilibrium it must continue to be optimal for a player even if there is a small chance that the other player will pick some out-of-equilibrium action (that the other player's hand will tremble). We sketch the idea here in its simplest and most intuitive form.

Game 3

                Player 1           
                 Left     Right    
Player 2    Top  (10,0)   (5,2)    
          Bottom (10,1)   (2,0)    

Consider the game above. The original equilibrium in this game is (bottom, left). But if there is any chance, of which row is aware, that column might make an error and deviate from his equilibrium strategy (even if there is only a small chance of such an error), then bottom is no longer strictly optimal: row, depending on his attitude to risk, may play top instead. Column anticipates such behaviour and responds optimally with right. The trembling-hand perfect equilibrium is (top, right). We can note here that (top, right) is a better outcome for column and a worse outcome for row. Column has gained by having a reputation for unsteadiness of hand. We do not regard the trembling-hand concept as being particularly intuitively plausible, although its plausibility clearly depends on the individual circumstances of the game. People tend to neglect small risks. Life is difficult enough as it is without our worrying about getting knocked down every time we cross the road, especially if we want to get to the other side. There are a multitude of small risks that we could worry about. Which trembles are we to fear?
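The effect of the tremble on row's choice can be verified directly (payoffs from the Game 3 table, first coordinate being row's; the tremble probability eps is an arbitrary small value of my own choosing):

```python
from fractions import Fraction

# Game 3 payoffs: (row, column); row chooses top/bottom, column chooses left/right.
U = {
    ("top", "left"): (10, 0), ("top", "right"): (5, 2),
    ("bottom", "left"): (10, 1), ("bottom", "right"): (2, 0),
}

def row_payoff(action, eps):
    """Row's expected payoff if column intends left but trembles to right
    with probability eps."""
    return (1 - eps) * U[(action, "left")][0] + eps * U[(action, "right")][0]

eps = Fraction(1, 100)  # any positive tremble, however small, breaks the tie
print(row_payoff("top", eps) > row_payoff("bottom", eps))  # -> True
```

With eps = 0 the two row payoffs are equal (both 10), which is exactly why the original (bottom, left) equilibrium rests on indifference; any positive tremble makes top strictly better.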

Subgame Perfection In Dynamic Games
Entry Deterrence Game

                        Incumbent
                        Fight    Don't fight
Entrant   Enter         (0,0)    (2,1)
          Stay out      (1,9)    (1,9)

In the above game, the entrant moves first and is observed by the incumbent. This game has two Nash equilibria: (stay out, fight), in which the incumbent plays a weakly dominated strategy, and (enter, don't fight). If we look at the game in its extensive form, it becomes clear that one of the Nash equilibria is not viable.

A (entrant):
    stay out -> B1 (incumbent):  fight -> (1,9);   don't fight -> (1,9)
    enter    -> B2 (incumbent):  fight -> (0,0);   don't fight -> (2,1)

If the entrant enters, that is a fait accompli, and the rational welfare-maximising response of the incumbent is to choose don't fight. However, it is the threat of fight that supports the (stay out, fight) equilibrium. And if that threat is not credible (as is the case when the welfare-maximising incumbent will always play don't fight in response to enter), then the (stay out, fight) equilibrium is unviable. A rational entrant will never play stay out when he knows that he can maximise welfare in the (enter, don't fight) equilibrium. For the threat to be credible, the incumbent would have to commit himself to the fight strategy regardless of what the entrant plays. But it is hard to envisage how such a commitment itself could be credible. Once we arrive at node B2 (the B2 subgame), the rational welfare-maximising strategy of the incumbent is to choose don't fight, regardless of what he has said beforehand. This is the issue of subgame perfection. It arises only in dynamic games and involves the notion of the equilibrium path. As we can see in the Entry Deterrence Game, it is an off-equilibrium response (the lack of credibility of the fight threat) which undermines the (stay out, fight) equilibrium. Or, to put it another way, the (stay out, fight) equilibrium depends on irrational off-equilibrium behaviour. A Nash equilibrium can thus be termed subgame perfect if the players' strategies constitute a Nash equilibrium in every subgame (Selten, 1965).
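Backward induction on this two-stage tree can be sketched in a few lines (payoffs as in the tree above, written (entrant, incumbent); the data structure and function are my own):

```python
# Entry Deterrence game tree: the incumbent replies after observing the entrant.
tree = {
    "stay out": {"fight": (1, 9), "don't fight": (1, 9)},
    "enter": {"fight": (0, 0), "don't fight": (2, 1)},
}

def backward_induction(tree):
    """Solve each incumbent subgame first, then the entrant's choice,
    taking the incumbent's best replies as given."""
    incumbent_reply = {
        move: max(replies, key=lambda r: replies[r][1])  # incumbent maximises own payoff
        for move, replies in tree.items()
    }
    entrant_move = max(tree, key=lambda m: tree[m][incumbent_reply[m]][0])
    return entrant_move, incumbent_reply[entrant_move]

print(backward_induction(tree))
# -> ('enter', "don't fight")
```

The procedure discards the (stay out, fight) Nash equilibrium automatically: at the B2 subgame the incumbent's best reply is don't fight, so the entrant, anticipating this, enters.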

Final Comments

I have confined myself here to an introductory and intuitive discussion of the Nash equilibrium concept. There are many refining (strengthening) mechanisms which we simply haven't mentioned. In dynamic games of incomplete or asymmetric information, for example, players update their equilibrium beliefs by observing what of the hidden or private information has been revealed (or suggested) in the preceding move(s). We use Bayes' rule for this updating process, and the refined Nash concept is termed perfect Bayesian equilibrium. There are also other refinements, such as the Kohlberg and Mertens (1986) concept of stability and the Myerson (1978) concept of properness. In general, as we would expect, as games get richer (i.e. as the models get more complex) the solution concept needs to be refined. But the basic idea of the Nash equilibrium remains intact.

Varian (1987) writes as follows: 'Another problem with the Nash equilibrium of a game is that it does not necessarily lead to Pareto efficient outcomes.' This misses the point. It may not be optimal for people to behave in a Nash fashion. But if people do behave in a Nash fashion (or, to be more precise, in what approximates to a Nash fashion), as evidence suggests that they often do, then Nash is the optimal way of modelling that behaviour. As Rasmusen (1989) says in his introduction: 'It [game theory] has moved away from maximization by calculus, and inefficient allocations are common. The players act rationally, but the consequences are often bizarre, which makes application to a world of intelligent men and ludicrous outcomes appropriate.'


Gibbons, R. (1991) A Primer in Game Theory.

Kreps, D.M. (1990) A Course in Microeconomic Theory.

Rasmusen, E. (1989) Games and Information.

Varian, H.R. (1987) Intermediate Microeconomics.