# Nash Equilibrium Explained in a Game: Economics Essay


This essay has been submitted by a student. This is not an example of the work written by our professional essay writers.

### Let's suppose that:

The payoff when most members (more than 16 of the 31) choose the same option (accept or reject) is 1.

The payoff when the members cannot reach an agreement is 0.

The payoff when a member abstains is 0.

If the proposal is accepted, each member would prefer to have accepted it, and each member's utility is assumed to be 1. If the proposal is rejected, each member would prefer to have rejected it, and each member's utility is assumed to be 0.

| Members \ Members | Accept | Reject | Abstain |
|---|---|---|---|
| Accept | 1, 1 | 0, 0 | 0, 0 |
| Reject | 0, 0 | 1, 1 | 0, 0 |
| Abstain | 0, 0 | 0, 0 | 0, 0 |

Therefore, the two Nash equilibria in this game are:

(Accept, Accept): the proposal is accepted, as more than 16 members choose to accept it.

(Reject, Reject): the proposal is rejected, as more than 16 members choose to reject it.

I predict that the outcome will be rejection of the proposal. The 31 members each face three choices: accept, reject, or abstain. Each choice is made individually and is not affected by the others' decisions. In addition, the probability of each of the three choices is the same, 1/3. Thus, for each member, the probability of choosing to accept the proposal is 1/3, and the probability of not accepting it (rejecting or abstaining) is 2/3. As a result, the predicted outcome is that the proposal is rejected.
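The prediction above can be checked with a quick Monte Carlo sketch (an illustration only; the uniform 1/3-per-choice assumption is the one made in the text, and the function name is mine):

```python
import random

def proposal_accepted(n_members=31, threshold=16, trials=100_000, seed=0):
    """Estimate the probability that more than `threshold` of `n_members`
    accept, when each member independently picks accept / reject / abstain
    with probability 1/3 each (the assumption made above)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        accepts = sum(1 for _ in range(n_members) if rng.random() < 1/3)
        if accepts > threshold:
            hits += 1
    return hits / trials

print(proposal_accepted())  # a small probability: acceptance is unlikely
```

Under these assumptions the estimated probability of acceptance comes out well below 5%, which is consistent with predicting rejection.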

2. Suppose that the numbers the players write down are N1, N2, N3, …, Nn.

In addition, if player i (i = 1, 2, 3, …, N) wishes to win this game, his or her number should come closest to half the average of the numbers submitted by the other players, given by the formula: F(Ni) = (1/2) × (N1 + N2 + N3 + … + Nn − Ni) / (N − 1)

Two assumptions:

Assume that all the other players choose the same number 10; then the number player i should choose is F(Ni) = (1/2) × [10(N − 1)/(N − 1)] = 5.

Assume that all the other players choose the same number 1; then the number player i should choose is F(Ni) = (1/2) × [1(N − 1)/(N − 1)] = 0.5.

Two hypotheses:

If the players are all rational:

Because player i realizes that not all the other players will choose the same number 10, the first thing he or she can confirm is that a rational player must choose a number below 5.

Furthermore, because each of the N players chooses an integer between 1 and 10, by the two assumptions above rational players will choose integers from 1 to 5. This is the prerequisite for rational players to play this game in the following steps:

Step 1: Player i understands that there are N rational players in this game and that the probability of choosing each of these five numbers (1, 2, 3, 4, 5) is the same. So the most likely number for the N players to choose is half of 5, that is, 2.5.

Step 2: However, if every player considers this game strategically, they will all choose 2.5. In that case, the optimal number for player i to choose is 0.5 × 2.5 = 1.25.

Step 3: In addition, if player i knows that all the other players will also do what he or she did in step 2 and choose 1.25, then the optimal number for player i to choose is 0.5 × 1.25 = 0.625.

…

Step n: the optimal choice for player i is 2.5 × 0.5^(n − 1). As the players reason through more and more rounds, n goes to infinity and 2.5 × 0.5^(n − 1) becomes smaller and smaller, until it reaches the smallest number in this game, which is 1.

As a result, on the premise that all players are rational, the Nash equilibrium in this game is 1.
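The shrinking sequence of best responses can be sketched as follows (a minimal illustration; the floor of 1 reflects the restriction to numbers no smaller than 1, which is what stops the halving, as concluded above):

```python
def best_guess(initial=2.5, floor=1.0, steps=5):
    """Each extra round of strategic thinking halves the previous guess,
    but the guess can never fall below the smallest allowed number."""
    guess = initial
    path = [guess]
    for _ in range(steps):
        guess = max(floor, 0.5 * guess)
        path.append(guess)
    return path

print(best_guess())  # [2.5, 1.25, 1.0, 1.0, 1.0, 1.0]
```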

If the players are not all rational

The first method is based on the assumption that all the players are rational; however, in the real world, not all players are rational, or some might not be rational enough.

So player i should allow for some deviation: other players might choose very large numbers, which raises half the average of the numbers submitted by the other players.

As a result, in order to win this game, the most sensible method is to choose a slightly larger number, for example 2 or 3.

3. (a) According to the question, "Lauren chooses A, Dad observes A, and then decides whether or not to take Lauren to the game." Lauren moves first, then Dad moves, so we can define this as a dynamic game with complete and perfect information.

In this game, Lauren and Dad each choose their own action. A is the portion of an hour that Lauren spends annoying Emily: she can annoy Emily for either 0.5 or 0.25 of an hour (A = 0.5 or 0.25). Dad observes A, and then decides whether or not to take Lauren to the game (Lauren's benefit B_L = 0.2 if he takes her, 0 if not; Dad's benefit B_D = 1/3 if he takes her, A/2 if not).

The utility function of Lauren is: U_L = B_L + √A

If Lauren annoys Emily for 0.25 hours and Dad lets her go: U_L = 0.2 + √0.25 = 0.7

If Lauren annoys Emily for 0.25 hours and Dad punishes her: U_L = 0 + √0.25 = 0.5

If Lauren annoys Emily for 0.5 hours and Dad lets her go: U_L = 0.2 + √0.5 ≈ 0.907

If Lauren annoys Emily for 0.5 hours and Dad punishes her: U_L = 0 + √0.5 ≈ 0.707

The utility function of Dad is: U_D = B_D − A²

If Lauren annoys Emily for 0.25 hours and Dad lets her go: U_D = 1/3 − 0.25² ≈ 0.2708

If Lauren annoys Emily for 0.25 hours and Dad punishes her: U_D = 0.25/2 − 0.25² = 0.0625

If Lauren annoys Emily for 0.5 hours and Dad lets her go: U_D = 1/3 − 0.5² ≈ 0.0833

If Lauren annoys Emily for 0.5 hours and Dad punishes her: U_D = 0.5/2 − 0.5² = 0
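The payoff numbers listed above are consistent with utility functions of the form U_L = B_L + √A for Lauren and U_D = B_D − A² for Dad; here is a minimal check (function names are mine):

```python
from math import sqrt

def lauren_utility(A, taken_to_game):
    """Lauren: a 0.2 benefit from the game (0 if punished) plus sqrt(A)."""
    return (0.2 if taken_to_game else 0.0) + sqrt(A)

def dad_utility(A, taken_to_game):
    """Dad: a 1/3 benefit from going (A/2 if he punishes) minus A**2."""
    return (1/3 if taken_to_game else A / 2) - A**2

for A in (0.25, 0.5):
    for go in (True, False):
        print(f"A={A}, go={go}: "
              f"U_L={lauren_utility(A, go):.3f}, U_D={dad_utility(A, go):.4f}")
```

Running this reproduces the four payoff pairs: (0.7, 0.2708), (0.5, 0.0625), (0.907, 0.0833), and (0.707, 0).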

The game tree with the payoffs:

(b) In this dynamic (sequential-move) game of complete information, after Dad observes Lauren's action A, he decides whether or not to take Lauren to the game. In addition, Lauren understands that her behavior will directly affect Dad's decision, so she can think through the effect of her action on Dad's action before she chooses A (0.5 or 0.25).

Table 1 below shows the utilities of Lauren and Dad: Lauren acts first and predicts Dad's strategy.

| Lauren \ Dad | Go to the game | Punishment |
|---|---|---|
| 0.25 | 0.7, 0.2708 | 0.5, 0.0625 |
| 0.5 | 0.907, 0.0833 | 0.707, 0 |

Dad observes the information shown in the table above and can plan four strategies:

1. (Go, Go): Dad will take Lauren to the game whatever portion of an hour Lauren spends annoying Emily.

2. (Punish, Punish): Dad will punish Lauren and not take her to the game whatever portion of an hour Lauren spends annoying Emily.

3. (Go, Punish): Dad will take Lauren to the game if she annoys Emily for 0.25 hours, but will punish her and not take her to the game if she annoys Emily for 0.5 hours.

4. (Punish, Go): Dad will take Lauren to the game if she annoys Emily for 0.5 hours, but will punish her and not take her to the game if she annoys Emily for 0.25 hours.

Table 2 below shows how Dad responds to Lauren's strategies.

| Lauren \ Dad | Go, Go | Go, Punish | Punish, Go | Punish, Punish |
|---|---|---|---|---|
| 0.25 | 0.7, 0.2708 | 0.7, 0.2708 | 0.5, 0.0625 | 0.5, 0.0625 |
| 0.5 | 0.907, 0.0833 | 0.707, 0 | 0.907, 0.0833 | 0.707, 0 |

In Table 1, the Nash equilibrium is (0.5, Go): if Lauren annoys Emily for 0.5 hours, Dad takes her to the game. In Table 2, there are two Nash equilibria: (Go, Go) and (Punish, Go). Under (Go, Go), no matter whether A equals 0.5 or 0.25, Dad takes Lauren to the game. Under (Punish, Go), Dad takes Lauren to the game if she annoys Emily for 0.5 hours, but punishes her and does not take her to the game if she annoys Emily for 0.25 hours.

In addition, we can see that no matter what A is, Dad's utility is always higher if he takes Lauren to the game. Because both of them know the two utility functions, Lauren knows that Dad will always take her to the game rather than punish her, since he wants to maximize his utility; so Lauren can choose either 0.5 or 0.25 hours. For Lauren, the utility of 0.5 hours is greater than that of 0.25 hours. Therefore, Lauren's optimal choice is to annoy Emily for 0.5 hours.

The subgame-perfect Nash equilibrium strategies are: Lauren spends 0.5 hours annoying Emily, and Dad takes Lauren to the game.
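The backward-induction argument can be sketched numerically (a minimal version assuming the payoff numbers above; the function name is mine):

```python
from math import sqrt

def solve_by_backward_induction():
    """Dad moves last: for each A he goes iff going maximizes his utility;
    Lauren then picks the A that maximizes her utility given Dad's response."""
    best = None
    for A in (0.25, 0.5):
        dad_go = 1/3 - A**2          # Dad's utility if he takes Lauren
        dad_punish = A / 2 - A**2    # Dad's utility if he punishes
        go = dad_go >= dad_punish
        lauren = (0.2 if go else 0.0) + sqrt(A)
        if best is None or lauren > best[2]:
            best = (A, "Go" if go else "Punish", lauren)
    return best

print(solve_by_backward_induction())  # (0.5, 'Go', 0.907...)
```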

(c) Since the subgame-perfect Nash equilibrium has Lauren spending 0.5 hours annoying Emily and Dad taking her to the game, Dad's threat is not credible.

4. (a) Assume that player 1 plays the trigger strategy. Player 1 and player 2 cooperate by playing (Deny, Deny) forever, and each gets R in every stage. However, if one player cheats, he or she gets T, which is greater than R, and the other player gets S, which is smaller than 0. After one instance of cheating, both players get 0 forever.

If player 1 continues to play the trigger strategy at stage t and after, then he or she receives the sequence of payoffs R, R, R, … from stage t onward. Discounting these payoffs to stage t with discount factor δ gives us:

R + δR + δ²R + δ³R + … = R / (1 − δ)

If he or she deviates from the trigger strategy at stage t, then he or she triggers noncooperation. At stage t, player 1 confesses and gets T (T > R), while player 2 still denies and gets S (S < 0). Their total payoff under one-sided cheating is smaller than under mutual cooperation (2R > S + T). Once player 2 realizes that player 1 has cheated, player 2 confesses forever after stage t. Player 1's best response to this is also to confess forever after stage t. Therefore player 1 gets a sequence of payoffs of 0. Discounting these payoffs to stage t gives us:

T + δ·0 + δ²·0 + … = T

Cooperation is at least as good as deviating when R / (1 − δ) ≥ T, i.e. δ ≥ 1 − (R/T).

So, if δ ≥ 1 − (R/T), player 1 cannot be better off by deviating from the trigger strategy. This implies that if player 2 plays the trigger strategy, then player 1's best response is the trigger strategy whenever δ ≥ 1 − (R/T). By symmetry, if player 1 plays the trigger strategy, then player 2's best response is the trigger strategy. As a result, there is a Nash equilibrium in which both players play the trigger strategy if δ ≥ 1 − (R/T).
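The deviation condition can be checked numerically (illustrative values; δ is written as `delta`):

```python
def trigger_is_best_response(R, T, delta):
    """Cooperating forever is worth R / (1 - delta); a one-shot deviation
    is worth T followed by zeros. Trigger play is a best response when
    R / (1 - delta) >= T, i.e. delta >= 1 - R/T."""
    return R / (1 - delta) >= T

# With R = 2 and T = 3, the threshold is delta >= 1 - 2/3 = 1/3.
print(trigger_is_best_response(R=2, T=3, delta=0.5))  # True: cooperation holds
print(trigger_is_best_response(R=2, T=3, delta=0.2))  # False: deviation pays
```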

Let's check whether this Nash equilibrium induces a Nash equilibrium in every subgame of the infinitely repeated game. Recall that every subgame of the infinitely repeated game is identical to the infinitely repeated game itself.

We have two classes of subgames:

Subgames following a history in which the stage outcomes are all (R, R)

Subgames following a history in which at least one stage outcome is not (R, R)

In the first class of subgames, the Nash equilibrium of the infinitely repeated game induces a Nash equilibrium in which each player still plays the trigger strategy.

In the second class of subgames, it induces a Nash equilibrium in which (0, 0) is played forever.

Tit-for-Tat is a type of trigger strategy, usually applied to the repeated Prisoner's Dilemma, in which a player responds in each period with the same action his or her opponent used in the previous period.

(b) If player 1 follows Tit-for-Tat, player 2 has no incentive to confess first: if player 2 cooperates, she continues to receive the high (Deny, Deny) payoff, whereas if she confesses once and both then return to Tit-for-Tat, play alternates between (Confess, Deny) and (Deny, Confess) forever. Player 2's average payoff from this alternation would be lower than if she had stuck to (Deny, Deny), and the loss would swamp the one-time gain.

However, Tit-for-Tat is almost never subgame perfect in the infinitely repeated Prisoner's Dilemma without discounting, because it is not rational for player 1 to punish player 2's initial confession: Tit-for-Tat's punishment leads to the miserable alternation of confess and deny, so player 1 would rather ignore player 2's first confession. The deviation is not from the equilibrium-path action of Deny, but from the off-equilibrium rule of confessing in response to a confession. Unlike the trigger strategy, Tit-for-Tat cannot enforce cooperation here.
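The alternation after a single confession can be made concrete (a small sketch; both players are assumed to follow Tit-for-Tat mechanically after their first moves):

```python
def tit_for_tat_path(first_moves, rounds=6):
    """Both players follow Tit-for-Tat: from round 2 on, each simply
    repeats the action the opponent took in the previous round."""
    a, b = first_moves
    path = [(a, b)]
    for _ in range(rounds - 1):
        a, b = b, a  # each copies the other's last move
        path.append((a, b))
    return path

# One initial confession echoes back and forth forever.
print(tit_for_tat_path(("Confess", "Deny")))
```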

5. (a) An iterated dominance equilibrium must be a Nash equilibrium. However, not all Nash equilibria are generated by iterated dominance; the Battle of the Sexes game is a good counterexample. It models a conflict between a wife who prefers to go to a concert and a husband who prefers to watch an NBA game. Even though people are selfish, the two deeply love each other and are willing to sacrifice their own preferences. The table of their payoffs is as follows:

| Husband \ Wife | NBA | Concert |
|---|---|---|
| NBA | 3, 2 | 0, 0 |
| Concert | 0, 0 | 2, 3 |

There is no iterated dominance equilibrium in the Battle of the Sexes. This game has two Nash equilibria: (NBA, NBA) and (Concert, Concert). Because neither player has a dominated strategy, these Nash equilibria are not generated by iterated dominance.

(b) The answer to this question is 'yes': each iterated dominance equilibrium is made up of strategies that are not weakly dominated.

The definition of weak dominance states that strategy B weakly dominates strategy A if B's payoff is strictly higher than A's in some strategy profile and is not lower in any strategy profile.

A strictly dominant strategy equilibrium is the unique Nash equilibrium. However, an iterated dominance equilibrium exists only when the iterative process leads to a unique strategy profile. Assume that we want B to be the final surviving strategy for a player; then B should weakly dominate the last strategy eliminated before it.

In an iterated dominance equilibrium, strictly dominated strategies must be eliminated by the players, while weakly dominated strategies need not be eliminated. For example, from the table in (a) we can see that, because the Nash equilibria are (NBA, NBA) and (Concert, Concert), no strategy can be eliminated, and these are not iterated dominance equilibria.

However, the answer might be different if we define "quasi-weakly dominates" to mean a strategy that is never worse than another. Take the Iteration Path Game as an example:

| Down \ Up | U1 | U2 | U3 |
|---|---|---|---|
| D1 | 2, 13 | 1, 11 | 1, 13 |
| D2 | 1, 8 | 0, 11 | 0, 13 |
| D3 | 0, 12 | 1, 11 | 0, 12 |

The iterated quasi-dominance equilibrium is (U1, D1). First delete D2, as D1 weakly dominates D2 (payoffs 2, 1, 1 beat 1, 0, 0). Then delete U3, as U1 now quasi-weakly dominates U3 (13, 12 equals 13, 12). Then delete D3, as D1 weakly dominates D3 (2, 1 beats 0, 1). Finally delete U2, as U1 dominates U2 (13 beats 11).

Because strategies are removed from the table as quasi-weakly dominated strategies are iteratively deleted, the strategies in the final equilibrium profile need not be best responses to the strategies that were removed along the way.
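The deletion order described above can be verified mechanically (a sketch; the dictionary layout and function name are mine, and "quasi-weak dominance" is taken to mean "never worse against the surviving opponent strategies"):

```python
# Payoffs from the Iteration Path Game: rows D1-D3, columns U1-U3,
# entries (row player's payoff, column player's payoff).
PAYOFFS = {
    ("D1", "U1"): (2, 13), ("D1", "U2"): (1, 11), ("D1", "U3"): (1, 13),
    ("D2", "U1"): (1, 8),  ("D2", "U2"): (0, 11), ("D2", "U3"): (0, 13),
    ("D3", "U1"): (0, 12), ("D3", "U2"): (1, 11), ("D3", "U3"): (0, 12),
}

def quasi_weakly_dominates(player, s, t, rows, cols):
    """s quasi-weakly dominates t if it is never worse than t against the
    surviving opponent strategies (weak dominance would additionally
    require being strictly better somewhere)."""
    idx = 0 if player == "row" else 1
    opponents = cols if player == "row" else rows
    pair = (lambda a, b: (a, b)) if player == "row" else (lambda a, b: (b, a))
    return all(PAYOFFS[pair(s, o)][idx] >= PAYOFFS[pair(t, o)][idx]
               for o in opponents)

rows, cols = ["D1", "D2", "D3"], ["U1", "U2", "U3"]
assert quasi_weakly_dominates("row", "D1", "D2", rows, cols); rows.remove("D2")
assert quasi_weakly_dominates("col", "U1", "U3", rows, cols); cols.remove("U3")
assert quasi_weakly_dominates("row", "D1", "D3", rows, cols); rows.remove("D3")
assert quasi_weakly_dominates("col", "U1", "U2", rows, cols); cols.remove("U2")
print(rows, cols)  # ['D1'] ['U1']
```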

(c) In game theory, rationalizability is one approach to solving or predicting Nash equilibria. It has both advantages and disadvantages.

Disadvantages:

The essential prerequisite of rationalizable strategies is that all the players involved in the game are rational. However, this is too idealized: in most situations it does not hold in the real world. Even if you are a rational person, you cannot be sure that you are involved in a game of completely rational competitors. Thus, rationalizable strategies may perform poorly when solving or predicting the outcomes of a game.

A player involved in a game aims to maximize his or her personal utility when making a decision. Thus, because players concentrate only on their own utility and ignore the other players' utilities, the outcome may occasionally be a 'zero-sum' or 'negative-sum' result.

Future play involves uncertain information, yet players often base their 'rational' strategy decisions on past performance and actions. This is another limitation of rationalizable strategies.

Advantages:

Rationalizable strategies help a player analyze the game and his or her own situation rationally.

They help a player make decisions that maximize his or her utility.

Rationalizability is a useful method to help players understand the logic of cooperation and competition.

6. (a) In a second-price auction, the bidder with the highest valuation wins and pays an amount equal to the second-highest bid. The English auction is occasionally regarded as an open second-price auction with a reserve price: the bidder with the highest valuation wins and pays an amount slightly higher than the valuation of the second-highest bidder.

It is a standard result that English auctions and second-price auctions produce equivalent revenue. In both, bidding one's true valuation is a dominant strategy.

The English auction is the most commonly used format because it is a fair and open process. Information acquisition is one of its advantages: the multiple rounds of open bidding enable price discovery. A bidder can observe her rivals' behavior, such as when they abstain from bidding, and the past highest bids.

From this, the bidder can update her valuation by discovering her rivals' valuations. Truthful bidding is a dominant strategy: in an English auction, bidders with private values keep bidding until the standing bid reaches their valuation. The next-to-last bidder drops out of the auction once his valuation is reached. Thus the bidder with the highest valuation beats the others at a price equal to the second-highest value.
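The second-price logic can be sketched as follows (an illustration with made-up valuations; truthful bidding is assumed):

```python
def second_price_outcome(valuations):
    """With truthful bids, the highest-valuation bidder wins and pays the
    second-highest valuation -- the price an English auction converges to
    once the next-to-last bidder drops out."""
    ranked = sorted(valuations, reverse=True)
    winner_value, price = ranked[0], ranked[1]
    return winner_value, price

winner_value, price = second_price_outcome([62.0, 87.5, 40.0, 71.3])
print(f"winner's value {winner_value}, pays {price}")  # winner's value 87.5, pays 71.3
```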

(b) This is a common-value auction because the jar is worth the same amount to everyone in the class, but every bidder has a different estimate of the number of pennies in it. This matches the definition of a common-value auction: 'the same value for everyone, but different bidders have different information about the underlying value.'

The winner with the highest bid obtains the jar and keeps the pennies inside after paying her bid. Because the bidders are all risk averse, the average bid will be less than the value of the pennies in the jar. However, the winning bid will tend to be greater than the value of the jar (the winner's curse). Thus, I do not think the winning bidder will usually make a profit.
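The winner's-curse intuition can be illustrated with a small simulation (assuming, for simplicity, that every bidder naively bids his or her unbiased estimate; all numbers are made up):

```python
import random

def average_winning_bid(true_value=100.0, n_bidders=20, noise=30.0,
                        trials=20_000, seed=0):
    """Each bidder's estimate is the true value plus symmetric noise, so
    estimates are unbiased -- but the *highest* estimate wins, and its
    expectation exceeds the true value."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        estimates = [true_value + rng.uniform(-noise, noise)
                     for _ in range(n_bidders)]
        total += max(estimates)  # the highest (most optimistic) bid wins
    return total / trials

print(average_winning_bid())  # well above 100: the winner overpays on average
```

Risk-averse bidders shade their bids downward, which lowers the average bid, but the selection effect on the maximum still pushes the winning bid above the true value.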