Inductive probability
Inductive probability attempts to give the probability of future events based on past events. It is the basis for inductive reasoning, and gives the mathematical basis for learning and the perception of patterns. It is a source of knowledge about the world. There are three sources of knowledge: inference, communication, and deduction. Communication relays information found using other methods, and deduction establishes new facts based on existing facts. Inference establishes new facts from data; its basis is Bayes' theorem.
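As a concrete illustration (not part of the article itself), the sketch below applies Bayes' theorem with a uniform prior to a run of past events, which yields Laplace's rule of succession for the probability of the next event; the function name and the repeated-event example are assumptions made here for the sketch.

```python
# Toy sketch: inductive probability via a Bayes update (beta-binomial model).
# With a uniform Beta(1, 1) prior over an unknown event rate, observing
# s successes in n trials gives a Beta(s + 1, n - s + 1) posterior whose
# predictive mean is Laplace's rule of succession: (s + 1) / (n + 2).

def predictive_probability(successes: int, trials: int) -> float:
    """Posterior predictive probability that the next trial succeeds."""
    return (successes + 1) / (trials + 2)

# If an event has occurred on every one of n past occasions, how strongly
# should we expect it next time under this deliberately naive model?
for n in (1, 10, 10_000):
    print(n, predictive_probability(n, n))
```

As uniform past evidence accumulates, the prediction approaches certainty without ever reaching it, which is the inductive behavior the paragraph describes.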
Cosmological argument
A cosmological argument, in natural theology, is an argument which claims that the existence of God can be inferred from facts concerning causation, explanation, change, motion, contingency, dependency, or finitude with respect to the universe or some totality of objects. A cosmological argument is also sometimes referred to as an argument from universal causation, an argument from first cause, the causal argument, or the prime mover argument.
Bayesian information criterion
In statistics, the Bayesian information criterion (BIC) or Schwarz information criterion (also SIC, SBC, SBIC) is a criterion for model selection among a finite set of models; models with lower BIC are generally preferred. It is based, in part, on the likelihood function and it is closely related to the Akaike information criterion (AIC). When fitting models, it is possible to increase the maximum likelihood by adding parameters, but doing so may result in overfitting.
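The paragraph omits the formula; the standard definition (supplied here, not quoted from this text) is BIC = k ln(n) − 2 ln(L̂), where k is the number of fitted parameters, n the sample size, and L̂ the maximized likelihood. The following sketch shows the overfitting point in miniature: a degree-5 polynomial fits linear data slightly better than a line, but BIC's parameter penalty favors the simpler model. The data and helper names are invented for illustration.

```python
import numpy as np

def bic(log_likelihood: float, k: int, n: int) -> float:
    """Standard BIC: k * ln(n) - 2 * ln(L_hat); lower is better."""
    return k * np.log(n) - 2.0 * log_likelihood

def gaussian_log_likelihood(residuals: np.ndarray) -> float:
    """Maximized log-likelihood of residuals under a Gaussian noise model."""
    n = len(residuals)
    sigma2 = np.mean(residuals**2)  # MLE of the noise variance
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
y = 2 * x + rng.normal(scale=0.1, size=x.size)  # truly linear data

for degree in (1, 5):
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    # k counts the polynomial coefficients plus the fitted noise variance.
    k = degree + 2
    print(degree, bic(gaussian_log_likelihood(resid), k, x.size))
```

The quintic achieves a higher likelihood, yet the k ln(n) penalty makes its BIC worse, so the linear model is preferred.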
Algorithmic probability
In algorithmic information theory, algorithmic probability, also known as Solomonoff probability, is a mathematical method of assigning a prior probability to a given observation. It was invented by Ray Solomonoff in the 1960s. It is used in inductive inference theory and analyses of algorithms. In his general theory of inductive inference, Solomonoff uses the method together with Bayes' rule to obtain probabilities of prediction for an algorithm's future outputs.
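The paragraph states the idea without a formula; in the standard notation (supplied here, not taken from this text), the algorithmic probability of a string x sums the weights of all programs p that make a universal prefix machine U output x:

```latex
m(x) = \sum_{p \,:\, U(p) = x} 2^{-|p|}
```

Shorter programs contribute exponentially more weight, so simpler explanations dominate the prior, and Bayes' rule then turns this prior into the prediction probabilities the paragraph mentions.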
Abductive reasoning
Abductive reasoning (also called abduction, abductive inference, or retroduction) is a form of logical inference that seeks the simplest and most likely conclusion from a set of observations. It was formulated and advanced by American philosopher Charles Sanders Peirce beginning in the last third of the 19th century. Abductive reasoning, unlike deductive reasoning, yields a plausible conclusion but does not definitively verify it. Abductive conclusions do not eliminate uncertainty or doubt, which is expressed in retreat terms such as "best available" or "most likely".
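To make the "simplest and most likely conclusion" idea concrete, here is a toy sketch (not from the article) that treats abduction as scoring candidate explanations for an observation and returning the best one; every hypothesis, probability, and name in it is invented for illustration.

```python
# Toy sketch of abduction as inference to the best explanation: given an
# observation, pick the hypothesis that makes it most probable, weighted by
# how plausible that hypothesis is a priori. All numbers are invented.

observation = "wet grass"

hypotheses = {
    "it rained":          {"prior": 0.30, "likelihood": 0.90},
    "sprinkler ran":      {"prior": 0.20, "likelihood": 0.80},
    "a water main burst": {"prior": 0.01, "likelihood": 0.99},
}

def best_explanation(hyps: dict) -> str:
    """Return the hypothesis with the highest prior * likelihood score."""
    return max(hyps, key=lambda h: hyps[h]["prior"] * hyps[h]["likelihood"])

print(best_explanation(hypotheses))  # "it rained"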
Neutral monism
Neutral monism is an umbrella term for a class of metaphysical theories in the philosophy of mind, concerning the relation of mind to matter. These theories take the fundamental nature of reality to be neither mental nor physical; in other words, that nature is "neutral". Neutral monism has gained prominence as a potential solution to theoretical issues within the philosophy of mind, specifically the mind–body problem and the hard problem of consciousness. The mind–body problem is the problem of explaining how mind relates to matter.
Training, validation, and test data sets
In machine learning, a common task is the study and construction of algorithms that can learn from and make predictions on data. Such algorithms function by making data-driven predictions or decisions, through building a mathematical model from input data. These input data used to build the model are usually divided into multiple data sets. In particular, three data sets are commonly used in different stages of the creation of the model: training, validation, and test sets.
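A minimal sketch of the three-way division, assuming scikit-learn is available: its train_test_split helper splits data two ways, so it is applied twice to carve out all three sets. The 60/20/20 ratio and the synthetic data are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic data standing in for real features and labels.
X = np.arange(100).reshape(-1, 1)
y = np.arange(100)

# First carve off the test set, then split the remainder into
# training and validation sets (0.25 of the remaining 80% = 20%).
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.25, random_state=0)

print(len(X_train), len(X_val), len(X_test))  # 60 20 20
```

The training set fits the model, the validation set guides choices such as hyperparameters during development, and the test set is held out for a final, unbiased evaluation.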
Free energy principle
The free energy principle is a theoretical framework that suggests that the brain reduces surprise or uncertainty by making predictions based on internal models and updating them using sensory input. It highlights the brain's objective of aligning its internal model with the external world to enhance prediction accuracy. This principle integrates Bayesian inference with active inference, where actions are guided by predictions and sensory feedback refines them. It has wide-ranging implications for comprehending brain function, perception, and action.
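The loop below is a deliberate caricature, not Friston's variational formalism: an internal estimate is nudged toward noisy sensory input in proportion to the prediction error, so surprise shrinks over time. The learning rate, true value, and noise level are all invented for the sketch.

```python
# Caricature of the predict-compare-update loop behind the free energy
# principle: the agent predicts its input, measures the prediction error,
# and revises its internal model so that future errors are smaller.

import random

random.seed(0)
belief = 0.0          # internal estimate of a hidden quantity
learning_rate = 0.1   # how strongly errors revise the belief (invented)

for step in range(200):
    sensation = 5.0 + random.gauss(0, 0.5)  # noisy input around a true value of 5
    error = sensation - belief              # prediction error, a proxy for surprise
    belief += learning_rate * error         # update the internal model

print(round(belief, 2))  # settles near 5.0
```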
Russell's teapot
Russell's teapot is an analogy, formulated by the philosopher Bertrand Russell (1872–1970), to illustrate that the philosophic burden of proof lies upon a person making empirically unfalsifiable claims, rather than shifting the burden of disproof to others. Russell specifically applied his analogy in the context of religion. He wrote that if he were to assert, without offering proof, that a teapot, too small to be seen by telescopes, orbits the Sun somewhere in space between the Earth and Mars, he could not expect anyone to believe him solely because his assertion could not be proven wrong.