In theoretical physics, dimensional regularization is a method introduced by Giambiagi and Bollini as well as – independently and more comprehensively – by 't Hooft and Veltman for regularizing integrals in the evaluation of Feynman diagrams; in other words, assigning values to them that are meromorphic functions of a complex parameter d, the analytic continuation of the number of spacetime dimensions.
Dimensional regularization writes a Feynman integral as an integral depending on the spacetime dimension d and the squared distances (x_i − x_j)^2 of the spacetime points x_i appearing in it. In Euclidean space, the integral often converges for −Re(d) sufficiently large, and can be analytically continued from this region to a meromorphic function defined for all complex d. In general, there will be a pole at the physical value (usually 4) of d, which needs to be canceled by renormalization to obtain physical quantities.
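As an illustration of the mechanism, consider the textbook one-loop Euclidean integral with two propagators; its closed form in d dimensions is standard material and not specific to this page. A minimal sympy sketch shows the meromorphic structure and the pole at d = 4:

```python
import sympy as sp

eps, Delta = sp.symbols('epsilon Delta', positive=True)
d = 4 - 2*eps  # spacetime dimension, analytically continued away from 4

# Textbook one-loop result (not taken from this page):
#   integral d^d k / (2*pi)^d of 1 / (k^2 + Delta)^2
#     = Gamma(2 - d/2) * Delta^(d/2 - 2) / (4*pi)^(d/2)
I = sp.gamma(2 - d/2) * Delta**(d/2 - 2) / (4*sp.pi)**(d/2)

# Laurent expansion around eps = 0 (i.e. d = 4) exposes the simple pole
# that renormalization must cancel; the remaining terms are finite.
print(sp.series(I, eps, 0, 1))
```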
Etingof (1999) showed that dimensional regularization is mathematically well defined, at least in the case of massive Euclidean fields, by using the Bernstein–Sato polynomial to carry out the analytic continuation.
Although the method is best understood when poles are subtracted and d is once again replaced by 4, it has also led to some successes when d is taken to approach another integer value where the theory appears to be strongly coupled, as in the case of the Wilson–Fisher fixed point. A further leap is to take the interpolation through fractional dimensions seriously. This has led some authors to suggest that dimensional regularization can be used to study the physics of crystals that macroscopically appear to be fractals.
It has been argued that zeta function regularization and dimensional regularization are equivalent, since both rest on the same principle: analytic continuation is used to assign a finite value to a series or integral that diverges as originally written.
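A minimal numerical sketch of this shared principle, using mpmath (our own illustration, not from the original text):

```python
from mpmath import zeta, nsum, inf

# The defining series zeta(s) = sum over n of n^(-s) converges only for Re(s) > 1:
print(nsum(lambda n: n**-2, [1, inf]))  # zeta(2) = 1.644934... (pi^2/6)

# The analytic continuation assigns a finite value even where the
# defining series 1 + 2 + 3 + ... diverges:
print(zeta(-1))                         # -0.0833... = -1/12
```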
Consider an infinite charged line with charge density λ, and calculate the potential at a point a distance a away from the line. The integral diverges:

V(a) = \frac{\lambda}{4\pi\varepsilon_0} \int_{-\infty}^{\infty} \frac{dx}{r},

where r = \sqrt{x^2 + a^2}.
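One way the dimensional continuation can proceed (a sketch in our own notation, with an arbitrary scale μ inserted to keep the units fixed and ε = 1 − d; not taken from the original) is to promote the integral along the line to d dimensions:

V(a) = \frac{\lambda\,\mu^{1-d}}{4\pi\varepsilon_0} \int \frac{d^d x}{\sqrt{x^2 + a^2}}
= \frac{\lambda}{4\pi\varepsilon_0}\,\pi^{(d-1)/2}\,\Gamma\!\left(\frac{1-d}{2}\right)\left(\frac{\mu}{a}\right)^{1-d}.

The integral converges for Re(d) < 1 and defines a meromorphic function of d. Setting d = 1 − ε and expanding about ε = 0 gives

V(a) = \frac{\lambda}{4\pi\varepsilon_0}\left(\frac{\mu}{a}\right)^{\varepsilon}\pi^{-\varepsilon/2}\,\Gamma\!\left(\frac{\varepsilon}{2}\right)
= \frac{\lambda}{4\pi\varepsilon_0}\left[\frac{2}{\varepsilon} - \gamma + \ln\frac{\mu^2}{\pi a^2}\right] + O(\varepsilon).

The pole is independent of a, so the physical field E(a) = −∂V/∂a = λ/(2πε₀a) is finite and independent of the regulator.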
In physics, especially quantum field theory, regularization is a method of modifying observables which have singularities in order to make them finite by the introduction of a suitable parameter called the regulator. The regulator, also known as a "cutoff", models our lack of knowledge about physics at unobserved scales (e.g. scales of small size or large energy levels). It compensates for (and requires) the possibility that "new physics" may be discovered at those scales which the present theory is unable to model, while enabling the current theory to give accurate predictions as an "effective theory" within its intended scale of use.
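As a minimal illustration (our own example; Λ and μ are hypothetical regulator scales, not from the text above): the logarithmically divergent integral

\int_0^\infty \frac{dk}{k}

becomes finite once a regulator restricts it to the scales on which the theory is trusted,

\int_\mu^\Lambda \frac{dk}{k} = \ln\frac{\Lambda}{\mu},

and the residual dependence on Λ is what renormalization later absorbs.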
The path integral formulation is a description in quantum mechanics that generalizes the action principle of classical mechanics. It replaces the classical notion of a single, unique classical trajectory for a system with a sum, or functional integral, over an infinity of quantum-mechanically possible trajectories to compute a quantum amplitude. This formulation has proven crucial to the subsequent development of theoretical physics, because manifest Lorentz covariance (time and space components of quantities enter equations in the same way) is easier to achieve than in the operator formalism of canonical quantization.
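In standard notation (a textbook formula, not specific to this page), the transition amplitude is written as a functional integral over all paths x(t) between the endpoints,

\langle x_f, t_f \mid x_i, t_i \rangle = \int \mathcal{D}x(t)\, e^{i S[x]/\hbar}, \qquad S[x] = \int_{t_i}^{t_f} L(x, \dot{x})\, dt,

so the classical trajectory is merely the stationary point of S, which dominates the sum when ħ is small.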
Renormalization is a collection of techniques in quantum field theory, statistical field theory, and the theory of self-similar geometric structures, that are used to treat infinities arising in calculated quantities by altering values of these quantities to compensate for effects of their self-interactions. But even if no infinities arose in loop diagrams in quantum field theory, it could be shown that it would be necessary to renormalize the mass and fields appearing in the original Lagrangian.
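Schematically (standard textbook notation, not taken from this page), the bare field, mass, and coupling of the original Lagrangian are re-expressed in terms of renormalized quantities plus counterterms,

\phi_0 = Z^{1/2}\phi, \qquad m_0^2 = m^2 + \delta m^2, \qquad g_0 = g + \delta g,

with Z, δm², and δg fixed order by order so that physical quantities come out finite.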