In philosophy and mathematics, Newcomb's paradox, also known as Newcomb's problem, is a thought experiment involving a game between two players, one of whom is able to predict the future.
Newcomb's paradox was created by William Newcomb of the University of California's Lawrence Livermore Laboratory. It was first analyzed in a philosophy paper by Robert Nozick in 1969 and appeared in the March 1973 issue of Scientific American, in Martin Gardner's "Mathematical Games" column. Today it is a widely debated problem in the philosophical branch of decision theory.
There is a reliable predictor, another player, and two boxes designated A and B. The player is given a choice between taking only box B or taking both boxes A and B. The player knows the following:
Box A is transparent and always contains a visible $1,000. Box B is opaque, and its content has already been set by the predictor: if the predictor has predicted that the player will take both boxes A and B, then box B contains nothing; if the predictor has predicted that the player will take only box B, then box B contains $1,000,000.
The player does not know what the predictor predicted or what box B contains while making the choice.
In his 1969 article, Nozick noted that "To almost everyone, it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly." The problem continues to divide philosophers today. In a 2020 survey, a modest plurality of professional philosophers chose to take both boxes (39.0% versus 31.2%).
Game theory offers two strategies for this game that rely on different principles: the expected utility principle and the strategic dominance principle. The problem is called a paradox because two analyses that both sound intuitively logical give conflicting answers to the question of what choice maximizes the player's payout.
Considering the expected utility when the probability of the predictor being right is almost certain or certain, the player should choose box B.
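The expected-utility comparison can be sketched numerically. The following is a minimal illustration, not part of the original problem statement; the helper `expected_utility` and the payoff amounts ($1,000 in box A, $1,000,000 in box B) follow the setup above, with `p_correct` standing for the predictor's assumed accuracy:

```python
def expected_utility(p_correct: float, one_box: bool) -> float:
    """Expected payout given the predictor's accuracy and the player's choice.

    One-boxing: with probability p_correct the predictor foresaw it,
    so box B holds $1,000,000; otherwise box B is empty.
    Two-boxing: with probability p_correct the predictor foresaw it,
    so the player gets only box A's $1,000; otherwise $1,001,000.
    """
    if one_box:
        return p_correct * 1_000_000
    return p_correct * 1_000 + (1 - p_correct) * 1_001_000

# Compare the two strategies at several assumed accuracies.
for p in (0.5, 0.51, 0.9, 0.99):
    one = expected_utility(p, one_box=True)
    two = expected_utility(p, one_box=False)
    print(f"p={p}: one-box ${one:,.0f} vs two-box ${two:,.0f}")
```

Under this model, one-boxing overtakes two-boxing once the predictor's accuracy exceeds roughly 50.05% (the break-even point of $1{,}001{,}000 / 2{,}000{,}000$), which is why a nearly certain predictor favors taking only box B.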
Decision theory (or the theory of choice; not to be confused with choice theory) is a branch of applied probability theory and analytic philosophy concerned with making decisions by assigning probabilities to various factors and numerical consequences to the outcomes. Normative decision theory, one of its branches, is concerned with identifying optimal decisions, where optimality is often determined by considering an ideal decision-maker who is able to calculate with perfect accuracy and is in some sense fully rational.
Retrocausality, or backwards causation, is a concept of cause and effect in which an effect precedes its cause in time and so a later event affects an earlier one. In quantum physics, the distinction between cause and effect is not made at the most fundamental level and so time-symmetric systems can be viewed as causal or retrocausal. Philosophical considerations of time travel often address the same issues as retrocausality, as do treatments of the subject in fiction, but the two phenomena are distinct.
A temporal paradox, time paradox, or time travel paradox, is a paradox, an apparent contradiction, or logical contradiction associated with the idea of time travel or other foreknowledge of the future. While the notion of time travel to the future complies with current understanding of physics via relativistic time dilation, temporal paradoxes arise from circumstances involving hypothetical time travel to the past – and are often used to demonstrate its impossibility.