Nuclear weapon yield: The explosive yield of a nuclear weapon is the amount of energy released, in the form of blast, thermal radiation, and nuclear radiation, when that weapon is detonated. It is usually expressed as a TNT equivalent (the standardized equivalent mass of trinitrotoluene which, if detonated, would produce the same energy discharge), either in kilotonnes (kt, thousands of tonnes of TNT), in megatonnes (Mt, millions of tonnes of TNT), or sometimes in terajoules (TJ). An explosive yield of one terajoule is equal to approximately 0.239 kilotonnes of TNT.
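The unit conversions above can be sketched in a few lines. The helper names are illustrative; the figure of 4.184 GJ per tonne of TNT is the conventional definition of the TNT equivalent:

```python
# Conventional TNT equivalent: 1 tonne of TNT is defined as 4.184e9 J.
TONNE_TNT_J = 4.184e9

def yield_kt_to_tj(kt):
    """Convert a yield in kilotonnes of TNT to terajoules."""
    return kt * 1e3 * TONNE_TNT_J / 1e12

def yield_tj_to_kt(tj):
    """Convert a yield in terajoules to kilotonnes of TNT."""
    return tj * 1e12 / (1e3 * TONNE_TNT_J)

print(yield_kt_to_tj(1.0))            # 4.184  (1 kt = 4.184 TJ)
print(round(yield_tj_to_kt(1.0), 3))  # 0.239  (1 TJ ≈ 0.239 kt)
```

This is why kilotonne-range weapons are sometimes quoted in single-digit thousands of terajoules.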
Nuclear weapons testing: Nuclear weapons tests are experiments carried out to determine the performance, yield, and effects of nuclear weapons. Testing nuclear weapons offers practical information about how the weapons function, how detonations are affected by different conditions, and how personnel, structures, and equipment are affected when subjected to nuclear explosions. Nuclear testing has also often been used as an indicator of scientific and military strength.
Peaceful nuclear explosion: Peaceful nuclear explosions (PNEs) are nuclear explosions conducted for non-military purposes. Proposed uses include excavation for the building of canals and harbours, electrical generation, the use of nuclear explosions to drive spacecraft, and as a form of wide-area fracking. PNEs were an area of some research from the late 1950s into the 1980s, primarily in the United States and the Soviet Union. In the U.S., a series of tests were carried out under Project Plowshare.
Nuclear weapons of the United Kingdom: In 1952, the United Kingdom became the third country (after the United States and the Soviet Union) to develop and test nuclear weapons, and is one of the five nuclear-weapon states under the Treaty on the Non-Proliferation of Nuclear Weapons. The UK initiated a nuclear weapons programme, codenamed Tube Alloys, during the Second World War. At the Quebec Conference in August 1943, it was merged with the American Manhattan Project.
Observational error: Observational error (or measurement error) is the difference between a measured value of a quantity and its true value. In statistics, an error is not necessarily a "mistake": variability is an inherent part of the results of measurements and of the measurement process. Measurement errors can be divided into two components: random and systematic. Random errors are errors in measurement that lead to measured values being inconsistent when repeated measurements of a constant attribute or quantity are taken; systematic errors, by contrast, shift every measurement in the same direction.
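The distinction between the two error components can be illustrated with a small simulation; the true value, bias, and spread below are arbitrary illustrative choices:

```python
import random

random.seed(0)
true_value = 10.0
systematic_bias = 0.5   # e.g., a miscalibrated instrument (illustrative value)
random_sd = 0.2         # spread of the random error (illustrative value)

# Repeated measurements of the same quantity: the random error scatters them,
# while the systematic error shifts them all in the same direction.
measurements = [true_value + systematic_bias + random.gauss(0, random_sd)
                for _ in range(1000)]

mean = sum(measurements) / len(measurements)
# Averaging many measurements cancels the random component but leaves
# the systematic bias intact: mean - true_value stays close to 0.5.
print(round(mean - true_value, 2))
```

This is why calibration (to remove systematic error) and repetition (to reduce random error) address different problems.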
Definite description: In formal semantics and philosophy of language, a definite description is a denoting phrase in the form of "the X" where X is a noun phrase or a singular common noun. The definite description is proper if X applies to a unique individual or object. For example, "the first person in space" and "the 42nd President of the United States of America" are proper. The definite descriptions "the person in space" and "the Senator from Ohio" are improper because the noun phrase X applies to more than one thing, and the definite descriptions "the first man on Mars" and "the Senator from some country" are improper because X applies to nothing.
Standard error: The standard error (SE) of a statistic (usually an estimate of a parameter) is the standard deviation of its sampling distribution or an estimate of that standard deviation. If the statistic is the sample mean, it is called the standard error of the mean (SEM). The sampling distribution of a mean is generated by repeated sampling from the same population and recording of the sample means obtained. This forms a distribution of different means, and this distribution has its own mean and variance.
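The repeated-sampling construction can be sketched directly. The population parameters (mean 50, standard deviation 10) and the sample size are illustrative; the point is that the spread of the sample means matches the theoretical SE of sigma / sqrt(n):

```python
import math
import random

random.seed(1)

def draw_sample_mean(n):
    # One sample of size n from a population with mean 50 and sd 10.
    return sum(random.gauss(50, 10) for _ in range(n)) / n

n = 25
means = [draw_sample_mean(n) for _ in range(2000)]

grand_mean = sum(means) / len(means)
sd_of_means = math.sqrt(sum((m - grand_mean) ** 2 for m in means)
                        / (len(means) - 1))

# Theory: SE of the mean = sigma / sqrt(n) = 10 / sqrt(25) = 2.
print(round(sd_of_means, 1))
```

In practice the population sigma is unknown, so the SEM is estimated from a single sample as s / sqrt(n), with s the sample standard deviation.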
Ab initio quantum chemistry methods: Ab initio quantum chemistry methods are computational chemistry methods based on quantum chemistry. The term ab initio was first used in quantum chemistry by Robert Parr and coworkers, including David Craig, in a semiempirical study on the excited states of benzene. The background is described by Parr. Ab initio means "from first principles" or "from the beginning", implying that the only inputs into an ab initio calculation are physical constants.
P-value: In null-hypothesis significance testing, the p-value is the probability of obtaining test results at least as extreme as the result actually observed, under the assumption that the null hypothesis is correct. A very small p-value means that such an extreme observed outcome would be very unlikely under the null hypothesis. Even though reporting p-values of statistical tests is common practice in academic publications of many quantitative fields, misinterpretation and misuse of p-values is widespread and has been a major topic in mathematics and metascience.
Propagation of uncertainty: In statistics, propagation of uncertainty (or propagation of error) is the effect of variables' uncertainties (or errors, more specifically random errors) on the uncertainty of a function based on them. When the variables are the values of experimental measurements, they have uncertainties due to measurement limitations (e.g., instrument precision) which propagate due to the combination of variables in the function. The uncertainty u can be expressed in a number of ways. It may be defined by the absolute error Δx.
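For uncorrelated inputs, the standard first-order propagation formula is u_f = sqrt(sum_i (∂f/∂x_i · u_i)^2). A sketch of it with numerical derivatives follows; the rectangle example and its uncertainties are illustrative values:

```python
import math

def propagate(f, values, uncertainties, eps=1e-6):
    """First-order uncertainty propagation for uncorrelated inputs:
    u_f = sqrt(sum_i (df/dx_i * u_i)^2),
    with partial derivatives approximated by forward differences."""
    total = 0.0
    for i, u in enumerate(uncertainties):
        shifted = list(values)
        shifted[i] += eps
        dfdx = (f(*shifted) - f(*values)) / eps
        total += (dfdx * u) ** 2
    return math.sqrt(total)

# Example: area of a rectangle A = x * y with x = 2.0 ± 0.1 and y = 3.0 ± 0.2.
area = lambda x, y: x * y
u_area = propagate(area, [2.0, 3.0], [0.1, 0.2])

# Analytic check: sqrt((y*u_x)^2 + (x*u_y)^2) = sqrt(0.09 + 0.16) = 0.5
print(round(u_area, 3))  # 0.5
```

The first-order formula is an approximation that holds when the uncertainties are small relative to the curvature of f; correlated inputs would additionally require the covariance terms.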