Summary
In probability theory and statistics, the probit function is the quantile function associated with the standard normal distribution. It has applications in data analysis and machine learning, in particular in exploratory statistical graphics and in specialized regression modeling of binary response variables. Mathematically, the probit is the inverse of the cumulative distribution function of the standard normal distribution, which is denoted Φ, so the probit is defined as probit(p) = Φ⁻¹(p) for p ∈ (0, 1).

Largely because of the central limit theorem, the standard normal distribution plays a fundamental role in probability theory and statistics. If we consider the familiar fact that the standard normal distribution places 95% of probability between −1.96 and 1.96, and is symmetric around zero, it follows that Φ(−1.96) = 0.025 and Φ(1.96) = 0.975. The probit function gives the 'inverse' computation, generating the value of a standard normal random variable associated with a specified cumulative probability. Continuing the example, probit(0.025) = −1.96 and probit(0.975) = 1.96. In general, Φ(probit(p)) = p and probit(Φ(z)) = z.

The idea of the probit function was published by Chester Ittner Bliss in a 1934 article in Science on how to treat data such as the percentage of a pest killed by a pesticide. Bliss proposed transforming the percentage killed into a "probability unit" (or "probit") that was linearly related to the modern definition (he defined it arbitrarily as equal to 0 for 0.0001 and 1 for 0.9999). He included a table to help other researchers convert their kill percentages to his probit, which they could then plot against the logarithm of the dose and thereby, it was hoped, obtain a more or less straight line. Such a so-called probit model is still important in toxicology, as well as in other fields. The approach is justified in particular if response variation can be rationalized as a lognormal distribution of tolerances among the subjects on test, where the tolerance of a particular subject is the dose just sufficient for the response of interest. The method introduced by Bliss was carried forward in Probit Analysis, an important text on toxicological applications by D. J. Finney.
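As a minimal sketch of the relationships above (assuming Python with SciPy, where the standard normal quantile function is exposed as scipy.stats.norm.ppf and the CDF as scipy.stats.norm.cdf):

from scipy.stats import norm

# probit(p) = Phi^-1(p): the standard normal quantile function
probit = norm.ppf
# Phi(z): the standard normal cumulative distribution function
Phi = norm.cdf

print(Phi(-1.96), Phi(1.96))         # approximately 0.025 and 0.975
print(probit(0.025), probit(0.975))  # approximately -1.96 and 1.96
print(Phi(probit(0.6)))              # 0.6, since Phi(probit(p)) = p
print(probit(Phi(1.234)))            # 1.234, since probit(Phi(z)) = z

The same quantile function is what a probit regression uses as its link: the probability of a binary response is modeled as Φ applied to a linear predictor, i.e. probit(P(y = 1)) is linear in the covariates.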
Related courses (2)
FIN-403: Econometrics
The course covers basic econometric models and methods that are routinely applied to obtain inference results in economic and financial applications.
EE-311: Fundamentals of machine learning
This course presents a general overview of machine learning techniques, reviewing the algorithms, the theoretical formalism, and the experimental protocols.
Related lectures (32)
MLE Applications: Binary Choice Models
Explores the application of Maximum Likelihood Estimation in binary choice models, covering probit and logit models, latent variable representation, and specification tests.
Maximum Likelihood Estimation: Theory
Covers the theory behind Maximum Likelihood Estimation, discussing properties and applications in binary choice and ordered multiresponse models.
Binary Responses: Link Functions and GLMs
Explores link functions for binary responses and the impact of sparseness on model interpretability.
Related publications (5)