In physics, especially quantum field theory, regularization is a method of modifying observables which have singularities in order to make them finite by the introduction of a suitable parameter called the regulator. The regulator, also known as a "cutoff", models our lack of knowledge about physics at unobserved scales (e.g. very short distances or very high energies). It compensates for (and requires) the possibility that "new physics" may be discovered at those scales which the present theory is unable to model, while enabling the current theory to give accurate predictions as an "effective theory" within its intended scale of use.
It is distinct from renormalization, another technique for controlling infinities that assumes no new physics and instead adjusts for self-interaction feedback.
Regularization was for many decades controversial even amongst its inventors, as it combines physical and epistemological claims into the same equations. However, it is now well understood and has proven to yield useful, accurate predictions.
Regularization procedures deal with infinite, divergent, and nonsensical expressions by introducing an auxiliary concept of a regulator (for example, a minimal distance ε in space, which is useful in case the divergences arise from short-distance physical effects). The correct physical result is obtained in the limit in which the regulator goes away (in our example, ε → 0), but the virtue of the regulator is that for its finite value, the result is finite.
However, the result usually includes terms proportional to expressions like 1/ε which are not well-defined in the limit ε → 0. Regularization is the first step towards obtaining a completely finite and meaningful result; in quantum field theory it must usually be followed by a related, but independent technique called renormalization. Renormalization is based on the requirement that some physical quantities — expressed by seemingly divergent expressions such as 1/ε — are equal to the observed values. Such a constraint allows one to calculate a finite value for many other quantities that looked divergent.
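As a toy illustration of this procedure (an example added here, not part of the source text), consider a logarithmically divergent integral regulated by a minimal distance ε:

```latex
% Toy example: a log-divergent integral regulated by a minimal distance \epsilon > 0.
\[
  \int_{\epsilon}^{L} \frac{dx}{x} \;=\; \ln\frac{L}{\epsilon} ,
\]
% finite for every fixed \epsilon, but divergent as \epsilon \to 0.
% Renormalization splits the logarithm against a reference scale \ell,
%   \ln(L/\epsilon) = \ln(L/\ell) + \ln(\ell/\epsilon),
% and absorbs the \ln(\ell/\epsilon) piece into a quantity fixed by
% measurement at the scale \ell, leaving the finite prediction \ln(L/\ell).
```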
This course aims to introduce the basic principles of machine learning in the context of the digital humanities. We will cover both supervised and unsupervised learning techniques, and study and implement …
Machine learning and data analysis are becoming increasingly central in sciences including physics. In this course, fundamental principles and methods of machine learning will be introduced and practiced …
Introduction to the path integral formulation of quantum mechanics. Derivation of the perturbation expansion of Green's functions in terms of Feynman diagrams. Several applications will be presented …
In theoretical physics, dimensional regularization is a method introduced by Giambiagi and Bollini as well as – independently and more comprehensively – by 't Hooft and Veltman for regularizing integrals in the evaluation of Feynman diagrams; in other words, assigning values to them that are meromorphic functions of a complex parameter d, the analytic continuation of the number of spacetime dimensions. Dimensional regularization writes a Feynman integral as an integral depending on the spacetime dimension d and the squared distances (x_i − x_j)² of the spacetime points x_i.
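For concreteness, the standard Euclidean master formula of dimensional regularization (a textbook result, quoted here for illustration) shows how a loop integral becomes a meromorphic function of d:

```latex
\[
  \int \frac{d^d \ell}{(2\pi)^d}\, \frac{1}{(\ell^2 + \Delta)^n}
  = \frac{1}{(4\pi)^{d/2}}\,
    \frac{\Gamma(n - d/2)}{\Gamma(n)}\, \Delta^{d/2 - n} .
\]
% For n = 2 and d = 4 - \varepsilon, the factor
% \Gamma(\varepsilon/2) = 2/\varepsilon - \gamma + O(\varepsilon)
% makes the divergence explicit as a pole at d = 4.
```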
In mathematics and theoretical physics, zeta function regularization is a type of regularization or summability method that assigns finite values to divergent sums or products, and in particular can be used to define determinants and traces of some self-adjoint operators. The technique is now commonly applied to problems in physics, but has its origins in attempts to give precise meanings to ill-conditioned sums appearing in number theory.
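The canonical example (a standard fact, added here for illustration) is the value assigned to the divergent sum of the positive integers via the analytic continuation of ζ(s) = Σ_{n≥1} n^(−s), which converges only for Re(s) > 1:

```latex
\[
  \sum_{n=1}^{\infty} n \;\longmapsto\; \zeta(-1) = -\tfrac{1}{12} ,
\]
% the value that appears, for instance, in Casimir-energy and
% bosonic-string computations.
```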
In quantum mechanics and quantum field theory, the propagator is a function that specifies the probability amplitude for a particle to travel from one place to another in a given period of time, or to travel with a certain energy and momentum. In Feynman diagrams, which serve to calculate the rate of collisions in quantum field theory, virtual particles contribute their propagator to the rate of the scattering event described by the respective diagram.
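As a standard example (added for illustration), the Feynman propagator of a free scalar field of mass m can be written as a momentum-space integral, with the iε prescription selecting time-ordered boundary conditions:

```latex
\[
  D_F(x - y) = \int \frac{d^4 p}{(2\pi)^4}\,
  \frac{i\, e^{-i p \cdot (x - y)}}{p^2 - m^2 + i\epsilon} .
\]
```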
Explores Ridge and Lasso Regression for regularization in machine learning models, emphasizing hyperparameter tuning and visualization of parameter coefficients.
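A minimal sketch of the kind of experiment this lecture describes, using scikit-learn; the synthetic dataset and the alpha grid are illustrative assumptions, not the course's actual material:

```python
# Fit Ridge (L2) and Lasso (L1) over a grid of regularization strengths
# and inspect how the coefficients shrink as alpha grows.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

X, y = make_regression(n_samples=200, n_features=10, noise=5.0, random_state=0)

for alpha in [0.01, 0.1, 1.0, 10.0]:      # hyperparameter grid (assumed)
    ridge = Ridge(alpha=alpha).fit(X, y)  # L2 penalty shrinks all coefficients
    lasso = Lasso(alpha=alpha).fit(X, y)  # L1 penalty drives some exactly to 0
    print(f"alpha={alpha}: ||ridge w||={np.linalg.norm(ridge.coef_):.1f}, "
          f"lasso zeros={np.sum(lasso.coef_ == 0)}")
```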
Covers the concept of sparse regression and the use of Gaussian additive noise in the context of MAP estimator and regularization.
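The standard correspondence this lecture alludes to (a textbook derivation, summarized here): with Gaussian additive noise y = Xw + η and a sparsity-inducing Laplace prior on w, the MAP estimator reduces to ℓ1-regularized least squares:

```latex
\[
  \hat{w}_{\mathrm{MAP}}
  = \arg\min_{w}\; \|y - Xw\|_2^2 + \lambda \|w\|_1 ,
\]
% whereas a Gaussian prior on w yields the \ell_2 (ridge) penalty instead;
% \lambda collects the noise variance and the prior scale.
```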
Covers the path integral representation of propagators for free particles and harmonic oscillators.
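The simplest worked case (a textbook result, added for illustration) is the free-particle propagator, which follows from the path integral by Gaussian integration over paths:

```latex
\[
  K(x_b, t_b; x_a, t_a)
  = \sqrt{\frac{m}{2\pi i \hbar (t_b - t_a)}}\,
    \exp\!\left[\frac{i m (x_b - x_a)^2}{2 \hbar (t_b - t_a)}\right] .
\]
```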
This paper studies kernel ridge regression in high dimensions under covariate shifts and analyzes the role of importance re-weighting. We first derive the asymptotic expansion of high dimensional kernels under covariate shifts. By a bias-variance decomposition …
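A generic sketch of importance-weighted kernel ridge regression, to fix the setting the abstract refers to; this is not the paper's method, and the RBF kernel, weights, and data are assumptions:

```python
# Importance-weighted KRR: minimize sum_i w_i (y_i - f(x_i))^2 + lam * ||f||^2
# in the RKHS; the representer theorem gives alpha = (W K + lam I)^{-1} W y.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gram matrix K[i, j] = exp(-gamma * ||A_i - B_j||^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def weighted_krr_fit(X, y, w, lam=1e-2, gamma=1.0):
    K = rbf_kernel(X, X, gamma)
    W = np.diag(w)
    return np.linalg.solve(W @ K + lam * np.eye(len(y)), W @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)
w = np.ones(100)   # importance weights p_test(x) / p_train(x); trivial here
alpha = weighted_krr_fit(X, y, w)
X_new = rng.normal(size=(5, 3))
print(rbf_kernel(X_new, X) @ alpha)   # predictions at new points
```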
The boundary correlation functions for a quantum field theory (QFT) in a fixed anti-de Sitter (AdS) background should reduce to S-matrix elements in the flat-space limit. We consider this procedure in detail for four-point functions. With minimal assumptions …
American Physical Society, 2023
Catastrophic overfitting (CO) in single-step adversarial training (AT) results in abrupt drops in the adversarial test accuracy (even down to 0%). For models trained with multi-step AT, it has been observed that the loss function behaves locally linearly …
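For context (an illustration of the setting, not the paper's proposed remedy), a generic single-step FGSM adversarial-training step in PyTorch; the toy model and ε are assumptions:

```python
import torch
import torch.nn.functional as F

def fgsm_at_step(model, x, y, optimizer, eps=8 / 255):
    # Single-step attack: perturb each input by eps in the gradient-sign direction.
    x_adv = x.clone().detach().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
    x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()
    # Train on the adversarial batch.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()

# Toy usage (hypothetical model and data):
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.rand(32, 1, 28, 28), torch.randint(0, 10, (32,))
print(fgsm_at_step(model, x, y, optimizer))
```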