Revelation principle

The revelation principle is a fundamental principle in mechanism design. It states that if a social choice function can be implemented by an arbitrary mechanism (i.e., if that mechanism has an equilibrium outcome that corresponds to the outcome of the social choice function), then the same function can be implemented by an incentive-compatible direct mechanism (i.e., one in which players truthfully report their types) with the same equilibrium outcome (payoffs).^{[1]}^{:224–225}
In mechanism design, the revelation principle is of utmost importance in finding solutions: the researcher need only look at the set of equilibria characterized by incentive compatibility. That is, if the mechanism designer wants to implement some outcome or property, he can restrict his search to mechanisms in which agents are willing to reveal their private information to the mechanism designer, and which have that outcome or property. If no such direct and truthful mechanism exists, no mechanism can implement this outcome/property. By narrowing the space that must be searched, the problem of finding a mechanism becomes much easier.
The principle comes in two variants, corresponding to the two flavors of incentive compatibility:
 The dominant-strategy revelation principle says that every social choice function that can be implemented in dominant strategies can be implemented by a dominant-strategy incentive-compatible (DSIC) mechanism (introduced by Allan Gibbard^{[2]}).
 The Bayesian-Nash revelation principle says that every social choice function that can be implemented in Bayesian-Nash equilibrium (i.e., in a Bayesian game, a game of incomplete information) can be implemented by a Bayesian-Nash incentive-compatible (BNIC) mechanism. This broader solution concept was introduced by Dasgupta, Hammond and Maskin,^{[3]} Holmstrom,^{[4]} and Myerson.^{[5]}
Example
Consider the following example. There is a certain item that Alice values at a and Bob values at b. The government needs to decide who will receive that item and on what terms.
 A social choice function is a function that maps a set of individual preferences to a social outcome. An example is the utilitarian function, which says "give the item to the person who values it the most". We denote a social choice function by Soc and its recommended outcome, given a set of preferences Prefs, by Soc(Prefs).
 A mechanism is a rule that maps a set of individual actions to a social outcome. A mechanism Mech induces a game which we denote by Game(Mech).
 A mechanism Mech is said to implement a social choice function Soc if, for every combination of individual preferences, there is a Nash equilibrium in Game(Mech) in which the outcome is Soc(Prefs). Two example mechanisms are:
 "Each individual says a number between 1 and 10. The item is given to the individual who says the lowest number; if both say the same number, then the item is given to Alice". This mechanism does NOT implement the utilitarian function, since for every individual who wants the item, it is a dominant strategy to say "1" regardless of his/her true value. This means that in equilibrium the item is always given to Alice, even if Bob values it more.
 The first-price sealed-bid auction is a mechanism that implements the utilitarian function. For example, if a < b, then any action profile in which Bob bids more than Alice and both bids are in the range (a, b] is a Nash equilibrium in which the item goes to Bob. Additionally, if the valuations of Alice and Bob are random variables drawn independently from the same distribution, then there is a Bayesian-Nash equilibrium in which the item goes to the bidder with the highest value.
 A direct mechanism is a mechanism in which the set of actions available to each player is just the set of possible preferences of that player.
 A direct mechanism Mech is said to be Bayesian-Nash incentive-compatible (BNIC) if there is a Bayesian-Nash equilibrium of Game(Mech) in which all players reveal their true preferences. Some example direct mechanisms are:
 "Each individual says how much he values the item. The item is given to the individual that said the highest value. In case of a tie, the item is given to Alice". This mechanism is NOT BNIC, since a player who wants the item is betteroff by saying the highest possible value, regardless of his true value.
 Firstprice sealedbid auction is also NOT BNIC, since the winner is always betteroff by bidding the lowest value that is slightly above the loser's bid.
 However, if the distribution of the players' valuations is known, then there is a variant which is BNIC and implements the utilitarian function.
 Moreover, the second-price auction (Vickrey auction) is known to be BNIC (it is even IC in a stronger sense: dominant-strategy IC). Additionally, it implements the utilitarian function.
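The contrast between the last two mechanisms can be checked by brute force. The sketch below (names and the bid grid are illustrative, not from the article) verifies that truthful bidding is a dominant strategy in a second-price auction but not in a first-price auction, with ties broken in favor of Alice (player 0) as in the examples above:

```python
# Brute-force check: truthful bidding is dominant under the second-price rule
# but not under the first-price rule. Ties go to player 0 (Alice).

def run_auction(bids, price_rule):
    """Return (winner, payment); the highest bid wins, ties go to player 0."""
    winner = max(range(len(bids)), key=lambda i: (bids[i], -i))
    return winner, price_rule(bids, winner)

def first_price(bids, winner):
    # Winner pays his own bid.
    return bids[winner]

def second_price(bids, winner):
    # Winner pays the highest losing bid.
    return max(b for i, b in enumerate(bids) if i != winner)

def utility(value, bids, player, price_rule):
    winner, payment = run_auction(bids, price_rule)
    return (value - payment) if winner == player else 0

def truthful_is_dominant(value, price_rule, bid_grid):
    """For every opponent bid, no deviation beats bidding one's true value."""
    for opp in bid_grid:
        truthful = utility(value, [value, opp], 0, price_rule)
        for deviation in bid_grid:
            if utility(value, [deviation, opp], 0, price_rule) > truthful:
                return False
    return True

grid = list(range(0, 11))
print(truthful_is_dominant(7, second_price, grid))  # True: second-price is DSIC
print(truthful_is_dominant(7, first_price, grid))   # False: underbidding pays
```

For the first-price rule, a bidder with value 7 facing an opponent bid of 0 gains by bidding 0 instead of 7 (winning the tie and paying nothing), which is exactly the failure of truthfulness described above.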
Proof
Suppose we have an arbitrary mechanism Mech that implements Soc.
We construct a direct mechanism Mech' that is truthful and implements Soc.
Mech' simply simulates the equilibrium strategies of the players in Game(Mech). That is:
 Mech' asks the players to report their valuations.
 Based on the reported valuations, Mech' calculates, for each player, his equilibrium strategy in Mech.
 Mech' returns the outcome that Mech returns for those equilibrium strategies.
Reporting the true valuations in Mech' is like playing the equilibrium strategies in Mech. Hence, reporting the true valuations is a Nash equilibrium in Mech', as desired. Moreover, the equilibrium payoffs are the same, as desired.
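The construction above can be sketched concretely. The example below (an illustration, not from the article) takes Mech to be a first-price auction with two bidders whose values are drawn i.i.d. uniformly on [0, 1]; in that Bayesian game the equilibrium strategy is to bid half one's value (a standard textbook result, assumed here). Mech' asks for values, simulates the equilibrium bids, and feeds them to Mech:

```python
# The proof's construction for a first-price auction with two bidders whose
# values are i.i.d. uniform on [0, 1] (equilibrium bid: half one's value).

def mech_first_price(bids):
    """The indirect mechanism Mech: highest bid wins and pays it; ties to player 0."""
    winner = 0 if bids[0] >= bids[1] else 1
    return winner, bids[winner]

def equilibrium_strategy(value):
    """Equilibrium bid in Mech for uniform [0, 1] values: bid v/2."""
    return value / 2

def mech_prime(reported_values):
    """The direct mechanism Mech': simulate the equilibrium strategies in Mech."""
    simulated_bids = [equilibrium_strategy(v) for v in reported_values]
    return mech_first_price(simulated_bids)

# Truthful reports to Mech' reproduce the equilibrium outcome of Mech:
values = [0.3, 0.8]
eq_bids = [equilibrium_strategy(v) for v in values]
assert mech_prime(values) == mech_first_price(eq_bids)  # same winner, same payment
```

Since Mech' applies the equilibrium map for the players, a player who reports untruthfully to Mech' is simply playing a non-equilibrium strategy in Mech, which by assumption cannot make him better off.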
More generally, the revelation principle says that for every arbitrary coordinating device (a.k.a. correlation device) there exists another, direct, device for which the state space equals the action space of each player. The coordination is then done by directly informing each player of his action.
See also
 Mechanism design
 Incentive compatibility
 The Market for Lemons
 Nash equilibrium
 Game theory
 Constrained Pareto efficiency
 Myerson–Satterthwaite theorem
References
 ^ Vazirani, Vijay V.; Nisan, Noam; Roughgarden, Tim; Tardos, Éva (2007). Algorithmic Game Theory (PDF). Cambridge, UK: Cambridge University Press. ISBN 0521872820.
 ^ Gibbard, A. 1973. Manipulation of voting schemes: a general result. Econometrica 41, 587–601.
 ^ Dasgupta, P., Hammond, P. and Maskin, E. 1979. The implementation of social choice rules: some results on incentive compatibility. Review of Economic Studies 46, 185–216.
 ^ Holmstrom, B. 1977. On incentives and control in organizations. Ph.D. thesis, Stanford University.
 ^ Myerson, R. 1979. Incentive compatibility and the bargaining problem. Econometrica 47, 61–73.