Strategy (game theory): In game theory, a player's strategy is any of the options they can choose in a setting where the outcome depends not only on their own actions but also on the actions of others. The discipline is mainly concerned with how a player's actions in a game affect the behavior or actions of the other players. Some examples of "games" include chess, bridge, poker, Monopoly, Diplomacy, and Battleship. A player's strategy determines the action the player will take at any stage of the game.
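As a minimal illustration (the game, information-set labels, and actions below are made up for the example), a pure strategy can be written down as a mapping from every decision point a player might face to a single action:

    # A pure strategy assigns one action to every decision point (information set)
    # a player might reach. Hypothetical two-stage toy game: player 2 moves after
    # observing whether player 1 played "A" or "B".
    player2_strategy = {
        "after_A": "left",   # action prescribed if player 1 played A
        "after_B": "right",  # action prescribed if player 1 played B
    }

    def play(strategy, information_set):
        """Return the action the strategy prescribes at this decision point."""
        return strategy[information_set]

    print(play(player2_strategy, "after_A"))  # -> left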
Cournot competition: Cournot competition is an economic model used to describe an industry structure in which companies compete on the amount of output they will produce, which they decide on independently of each other and at the same time. It is named after Antoine Augustin Cournot (1801–1877), who was inspired by observing competition in a spring water duopoly. It has the following features: there is more than one firm and all firms produce a homogeneous product, i.e., there is no product differentiation; and firms do not cooperate, i.e., there is no collusion.
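A minimal sketch of the model's logic, assuming linear inverse demand P = a - b(q1 + q2) and a common constant marginal cost c (the functional form and the numbers are assumptions for illustration): each firm's best response to its rival's output is q_i = (a - c - b*q_j) / (2b), and the best responses intersect at the Cournot-Nash quantity (a - c) / (3b).

    # Cournot duopoly sketch with assumed linear inverse demand P = a - b*(q1 + q2)
    # and common constant marginal cost c. Each firm's best response to the rival's
    # output q_j maximizes (P - c) * q_i, giving q_i = (a - c - b*q_j) / (2b).
    def best_response(q_rival, a=100.0, b=1.0, c=10.0):
        return max(0.0, (a - c - b * q_rival) / (2 * b))

    # Iterating simultaneous best responses converges to the Cournot-Nash
    # equilibrium q1* = q2* = (a - c) / (3b) for this linear specification.
    q1 = q2 = 0.0
    for _ in range(50):
        q1, q2 = best_response(q2), best_response(q1)

    print(round(q1, 3), round(q2, 3))   # both approach 30.0
    print((100.0 - 10.0) / (3 * 1.0))   # analytical value: 30.0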
Backward induction: Backward induction is the process of reasoning backwards in time, from the end of a problem or situation, to determine a sequence of optimal actions. It proceeds by examining the last point at which a decision is to be made and identifying the optimal action at that moment. Using this information, one can then determine what to do at the second-to-last point of decision. This process continues backwards until the best action has been determined for every possible situation at every point in time.
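A minimal sketch of the procedure on an assumed two-stage game tree (the tree shape and payoffs are illustrative, not part of the entry): the last mover's choice is resolved first, and the first mover then optimizes against that.

    # Backward induction on a small, assumed extensive-form game tree.
    # Internal nodes: (player_index, {action: subtree}); leaves: payoff tuples.
    def backward_induction(node):
        """Return (payoffs, actions) chosen by working from the leaves back up."""
        if isinstance(node, tuple) and not isinstance(node[1], dict):
            return node, []          # leaf: payoffs, no further actions
        player, actions = node
        best = None
        for action, subtree in actions.items():
            payoffs, plan = backward_induction(subtree)
            if best is None or payoffs[player] > best[0][player]:
                best = (payoffs, [action] + plan)
        return best

    # Ultimatum-style toy game: player 0 offers "fair" or "greedy",
    # then player 1 accepts or rejects.
    tree = (0, {
        "fair":   (1, {"accept": (5, 5), "reject": (0, 0)}),
        "greedy": (1, {"accept": (8, 2), "reject": (0, 0)}),
    })
    print(backward_induction(tree))  # -> ((8, 2), ['greedy', 'accept'])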
Sequential game: In game theory, a sequential game is a game in which one player chooses their action before the others choose theirs. The other players must have at least some information about the first player's choice; otherwise the difference in timing would have no strategic effect. Sequential games are governed by the time axis and are represented in the form of decision trees. Sequential games with perfect information can be analysed mathematically using combinatorial game theory. Decision trees are the extensive form of dynamic games and show the possible ways a given game can be played.
Nash equilibrium: In game theory, the Nash equilibrium, named after the mathematician John Nash, is the most common way to define the solution of a non-cooperative game involving two or more players. In a Nash equilibrium, each player is assumed to know the equilibrium strategies of the other players, and no player has anything to gain by changing only their own strategy. The principle of the Nash equilibrium dates back to Cournot, who in 1838 applied it to competing firms choosing outputs.
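A minimal sketch that checks every pure-strategy profile of a two-player game for the no-profitable-unilateral-deviation property; the payoff matrices below are the standard Prisoner's Dilemma numbers, used here as an assumed example.

    # Find the pure-strategy Nash equilibria of a two-player game given as
    # payoff matrices A (row player) and B (column player).
    def pure_nash_equilibria(A, B):
        equilibria = []
        n_rows, n_cols = len(A), len(A[0])
        for i in range(n_rows):
            for j in range(n_cols):
                row_best = all(A[i][j] >= A[k][j] for k in range(n_rows))
                col_best = all(B[i][j] >= B[i][l] for l in range(n_cols))
                if row_best and col_best:   # no player gains by deviating alone
                    equilibria.append((i, j))
        return equilibria

    # Prisoner's Dilemma: strategy 0 = cooperate, 1 = defect.
    A = [[-1, -3], [0, -2]]   # row player's payoffs
    B = [[-1, 0], [-3, -2]]   # column player's payoffs
    print(pure_nash_equilibria(A, B))  # -> [(1, 1)]: mutual defection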
Signaling game: In game theory, a signaling game is a simple type of dynamic Bayesian game. The essence of a signaling game is that one player takes an action, the signal, to convey information to another player, where sending the signal is more costly if the information being conveyed is false. A manufacturer, for example, might provide a warranty for its product in order to signal to consumers that the product is unlikely to break down. The classic example is a worker who acquires a college degree not because it increases their skill, but because it conveys their ability to employers.
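A minimal sketch of the separation logic behind the degree example, with made-up wages and signaling costs: the signal only conveys information if it is worth sending for the high-ability type but not for the low-ability type.

    # Spence-style education-signaling sketch with assumed numbers.
    # Employers pay w_degree to applicants with a degree and w_none otherwise.
    # Acquiring the degree costs each worker type a different amount of effort.
    w_degree, w_none = 100.0, 60.0
    cost = {"high_ability": 20.0, "low_ability": 50.0}   # assumed signaling costs

    def gets_degree(worker_type):
        """A type acquires the degree only if the wage gain exceeds its cost."""
        return w_degree - cost[worker_type] > w_none

    for t in ("high_ability", "low_ability"):
        print(t, gets_degree(t))
    # high_ability True, low_ability False: the signal separates the types, so
    # employers can infer ability from the degree even though the degree itself
    # adds no skill.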
Perfect information: In economics, perfect information (sometimes referred to as "no hidden information") is a feature of perfect competition. With perfect information in a market, all consumers and producers have complete and instantaneous knowledge of all market prices, their own utility, and their own cost functions. In game theory, a sequential game has perfect information if each player, when making any decision, is perfectly informed of all the events that have previously occurred, including the "initialization event" of the game (e.g., the starting hands of each player in a card game).
Complete information: In economics and game theory, complete information is an economic situation or game in which knowledge about other market participants or players is available to all participants. The utility functions (including risk aversion), payoffs, strategies and "types" of players are thus common knowledge. Complete information is the concept that each player in the game is aware of the sequence, strategies, and payoffs throughout gameplay.
Game tree: In the context of combinatorial game theory, which typically studies sequential games with perfect information, a game tree is a graph representing all possible game states within such a game. Such games include well-known ones such as chess, checkers, Go, and tic-tac-toe. A game tree can be used to measure the complexity of a game, as it represents all the possible ways the game can play out. Because the game trees of complex games such as chess are enormous, algorithms designed to play this class of games use partial game trees, which makes computation feasible on modern computers.
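A minimal sketch of depth-limited minimax search over a game tree, the kind of partial-tree computation mentioned above; the tree, the leaf values, and the cutoff heuristic are assumptions for illustration.

    # Depth-limited minimax over a game tree, the kind of partial search used
    # when the full tree (as in chess) is too large to expand. A node is either
    # a numeric value (leaf / heuristic evaluation) or a list of child subtrees.
    def minimax(node, depth, maximizing):
        if isinstance(node, (int, float)) or depth == 0:
            return evaluate(node)
        values = [minimax(child, depth - 1, not maximizing) for child in node]
        return max(values) if maximizing else min(values)

    def evaluate(node):
        """Heuristic stand-in: a leaf is its own value; cut-off nodes score 0."""
        return node if isinstance(node, (int, float)) else 0

    # Small assumed tree: the maximizer moves at the root, the minimizer below.
    tree = [[3, 5], [2, 9], [0, 7]]
    print(minimax(tree, depth=4, maximizing=True))  # -> 3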
Bayesian game: In game theory, a Bayesian game is a strategic decision-making model which assumes players have incomplete information. Players hold private information relevant to the game, meaning that the payoffs are not common knowledge. Bayesian games model the outcome of player interactions using aspects of Bayesian probability. They are notable because they allowed, for the first time in game theory, the specification of solutions to games with incomplete information. Hungarian economist John C. Harsanyi introduced the concept of Bayesian games in a series of papers published in 1967 and 1968.
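A minimal sketch of the bookkeeping a Bayesian game requires (the types, prior beliefs, and payoffs below are made-up numbers): a player who cannot observe the opponent's type picks the action with the highest expected payoff under their belief over types.

    # Bayesian-game sketch: player 1 does not know player 2's type, only a prior
    # belief over it, and chooses the action with the highest expected payoff.
    beliefs = {"tough": 0.3, "weak": 0.7}            # assumed prior over types

    # Assumed payoffs to player 1: payoff[action][opponent_type]
    payoff = {
        "enter":    {"tough": -2.0, "weak": 4.0},
        "stay_out": {"tough":  0.0, "weak": 0.0},
    }

    def expected_payoff(action):
        return sum(prob * payoff[action][t] for t, prob in beliefs.items())

    best_action = max(payoff, key=expected_payoff)
    print({a: expected_payoff(a) for a in payoff}, best_action)
    # enter: 0.3 * (-2) + 0.7 * 4 = 2.2 > 0, so "enter" is the best response
    # to these beliefs.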