Frequentist inference
Frequentist inference is a type of statistical inference based on frequentist probability, which treats "probability" as equivalent to "frequency" and draws conclusions from sample data by emphasizing the frequency or proportion of findings in the data. Frequentist inference underlies frequentist statistics, on which the well-established methodologies of statistical hypothesis testing and confidence intervals are founded. The primary formulation of frequentism stems from the presumption that statistics can be interpreted as probabilistic frequencies.
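To make the "probability as long-run frequency" reading concrete, here is a small simulation sketch of my own (not part of the entry): the relative frequency of heads in repeated fair-coin flips settles toward the underlying probability 0.5 as the number of trials grows.

    import random

    random.seed(1)
    for n in (100, 10_000, 1_000_000):
        # Flip a fair coin n times and report the observed relative frequency.
        heads = sum(random.random() < 0.5 for _ in range(n))
        print(n, "flips -> relative frequency of heads:", heads / n)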
P-value
In null-hypothesis significance testing, the p-value is the probability of obtaining test results at least as extreme as the result actually observed, under the assumption that the null hypothesis is correct. A very small p-value means that such an extreme observed outcome would be very unlikely under the null hypothesis. Even though reporting p-values of statistical tests is common practice in academic publications of many quantitative fields, misinterpretation and misuse of p-values are widespread and have been a major topic in mathematics and metascience.
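As an illustration of the definition (a sketch of my own, with a hypothetical scenario), the Python snippet below estimates a two-sided p-value by simulating the null distribution: after observing 60 heads in 100 flips of a coin assumed fair under H0, it measures how often a result at least as extreme occurs.

    import random

    random.seed(2)
    observed_heads, n_flips, trials = 60, 100, 100_000
    extreme = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(n_flips))
        # Two-sided: count outcomes at least as far from the null mean (50).
        if abs(heads - n_flips / 2) >= abs(observed_heads - n_flips / 2):
            extreme += 1
    print("estimated p-value:", extreme / trials)  # roughly 0.057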
Decay product
In nuclear physics, a decay product (also known as a daughter product, daughter isotope, radio-daughter, or daughter nuclide) is the remaining nuclide left over from radioactive decay. Radioactive decay often proceeds via a sequence of steps (a decay chain). For example, ²³⁸U decays to ²³⁴Th, which decays to ²³⁴ᵐPa, which decays, and so on, to ²⁰⁶Pb (which is stable):

    ²³⁸U → ²³⁴Th → ²³⁴ᵐPa → … → ²⁰⁶Pb

In this example, ²³⁴Th, ²³⁴ᵐPa, …, ²⁰⁶Pb are the decay products of ²³⁸U. ²³⁴Th is the daughter of the parent ²³⁸U, and ²³⁴ᵐPa (234 metastable) is the granddaughter of ²³⁸U.
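The growth of a daughter nuclide can be computed from the decay constants of the parent and daughter. As a sketch of my own (not from the entry, with purely illustrative decay constants), the snippet below evaluates the two-member Bateman solution for the number of daughter atoms at time t, assuming no daughter atoms initially and λ₁ ≠ λ₂.

    import math

    def daughter_atoms(n1_0, lam1, lam2, t):
        # Bateman solution for parent -> daughter: daughter population at
        # time t, given n1_0 parent atoms and zero daughter atoms at t = 0.
        return n1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))

    # Hypothetical decay constants (per year), chosen for illustration only.
    print(daughter_atoms(1e6, 1e-3, 5e-2, t=10.0))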
Exact test
In statistics, an exact (significance) test is a test such that, if the null hypothesis is true, then all assumptions made during the derivation of the distribution of the test statistic are met. Using an exact test provides a significance test that maintains the type I error rate of the test (α) at the desired significance level of the test. For example, an exact test at a significance level of α, when repeated over many samples where the null hypothesis is true, will reject at most a fraction α of the time.
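For instance, an exact binomial test computes the p-value by summing exact binomial probabilities rather than using an asymptotic approximation, so the stated type I error rate holds at any sample size. Below is a minimal one-sided version in Python (my own sketch; the data and names are hypothetical).

    from math import comb

    def exact_binomial_p(successes, n, p0):
        # One-sided exact test: P(X >= successes) for X ~ Binomial(n, p0).
        return sum(comb(n, k) * p0**k * (1 - p0) ** (n - k)
                   for k in range(successes, n + 1))

    # Hypothetical data: 9 successes in 10 trials, H0: success probability 0.5.
    print(exact_binomial_p(9, 10, 0.5))  # 0.0107...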
Kaon
In particle physics, a kaon (/ˈkeɪ.ɒn/), also called a K meson and denoted K, is any of a group of four mesons distinguished by a quantum number called strangeness. In the quark model they are understood to be bound states of a strange quark (or antiquark) and an up or down antiquark (or quark). Kaons have proved to be a copious source of information on the nature of fundamental interactions since their discovery in cosmic rays in 1947.
Vector meson
In high energy physics, a vector meson is a meson with total spin 1 and odd parity (usually noted as Jᴾ = 1⁻). Vector mesons have been seen in experiments since the 1960s, and are well known for their spectroscopic pattern of masses. The vector mesons contrast with the pseudovector mesons, which also have a total spin of 1 but instead have even parity. The vector and pseudovector mesons are also dissimilar in that the spectroscopy of vector mesons tends to show nearly pure states of constituent quark flavors, whereas pseudovector mesons and scalar mesons tend to be expressed as composites of mixed states.
D meson
The D mesons are the lightest particles containing charm quarks. They are often studied to gain knowledge of the weak interaction. The strange D mesons (Dₛ) were called "F mesons" prior to 1986. The D mesons were discovered in 1976 by the Mark I detector at the Stanford Linear Accelerator Center. Since the D mesons are the lightest mesons containing a single charm quark (or antiquark), they must change the charm (anti)quark into an (anti)quark of another type to decay.
Likelihood-ratio test
In statistics, the likelihood-ratio test assesses the goodness of fit of two competing statistical models, specifically one found by maximization over the entire parameter space and another found after imposing some constraint, based on the ratio of their likelihoods. If the constraint (i.e., the null hypothesis) is supported by the observed data, the two likelihoods should not differ by more than sampling error. Thus the likelihood-ratio test tests whether this ratio is significantly different from one, or equivalently whether its natural logarithm is significantly different from zero.
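As a sketch of the usual recipe (my own illustration, not from the entry): under regularity conditions, Wilks' theorem gives that −2 log Λ is asymptotically χ²-distributed, with degrees of freedom equal to the number of constrained parameters. Below, a normal-mean hypothesis H₀: μ = 0 is tested against an unconstrained mean, with the variance profiled out of both models, which yields a closed-form statistic.

    import math
    import random

    random.seed(0)
    data = [random.gauss(0.3, 1.0) for _ in range(200)]  # hypothetical sample
    n = len(data)

    # Full model: MLEs are the sample mean and the (biased) sample variance.
    mu_hat = sum(data) / n
    var_full = sum((x - mu_hat) ** 2 for x in data) / n

    # Constrained model (H0: mu = 0): variance MLE with the mean fixed at 0.
    var_null = sum(x * x for x in data) / n

    # -2 log(likelihood ratio); profiling out sigma gives this closed form.
    lr_stat = n * math.log(var_null / var_full)
    print("LR statistic:", lr_stat)  # compare with chi-square(1), 3.84 at the 5% level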
Measurement
Measurement is the quantification of attributes of an object or event, which can be used to compare with other objects or events. In other words, measurement is the process of determining how large or small a physical quantity is as compared to a basic reference quantity of the same kind. The scope and application of measurement depend on the context and discipline. In the natural sciences and engineering, measurements do not apply to nominal properties of objects or events, which is consistent with the guidelines of the International Vocabulary of Metrology published by the International Bureau of Weights and Measures.
Coefficient of variation
In probability theory and statistics, the coefficient of variation (CV), also known as normalized root-mean-square deviation (NRMSD), percent RMS, and relative standard deviation (RSD), is a standardized measure of dispersion of a probability distribution or frequency distribution. It is defined as the ratio of the standard deviation σ to the mean μ (or its absolute value, |μ|), and is often expressed as a percentage ("%RSD"). The CV or RSD is widely used in analytical chemistry to express the precision and repeatability of an assay.
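As a minimal computation sketch (my own; the replicate values are made up), the snippet below reports the CV of a small set of assay measurements as a percentage.

    import statistics

    data = [4.2, 4.0, 4.5, 3.9, 4.1]  # hypothetical assay replicates
    # CV = sample standard deviation / |mean|, expressed here as %RSD.
    cv_percent = statistics.stdev(data) / abs(statistics.mean(data)) * 100
    print(f"%RSD = {cv_percent:.1f}%")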