In mathematics and theoretical physics, zeta function regularization is a type of regularization or summability method that assigns finite values to divergent sums or products, and in particular can be used to define determinants and traces of some self-adjoint operators. The technique is now commonly applied to problems in physics, but has its origins in attempts to give precise meanings to ill-conditioned sums appearing in number theory.
There are several different summation methods called zeta function regularization for defining the sum of a possibly divergent series a_1 + a_2 + ⋯.
One method is to define its zeta-regularized sum to be ζ_A(−1) if this is defined, where the zeta function ζ_A(s) is defined for large Re(s) by
ζ_A(s) = 1/a_1^s + 1/a_2^s + ⋯
if this sum converges, and by analytic continuation elsewhere.
In the case when a_n = n, the zeta function is the ordinary Riemann zeta function. This method was used by Euler to "sum" the series 1 + 2 + 3 + 4 + ⋯ to ζ(−1) = −1/12.
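As a quick numerical illustration (a sketch not drawn from the text above; it assumes the mpmath library is available), the analytically continued Riemann zeta function does take the value −1/12 at s = −1:

```python
# Evaluate the analytic continuation of the Riemann zeta function at s = -1,
# the value zeta regularization assigns to 1 + 2 + 3 + 4 + ...
from mpmath import zeta

print(zeta(-1))   # -0.0833333333333333, i.e. -1/12
```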
Hawking (1977) showed that in flat space, in which the eigenvalues of Laplacians are known, the zeta function corresponding to the partition function can be computed explicitly. Consider a scalar field φ contained in a large box of volume V in flat spacetime at temperature T = β^{−1}. The partition function is defined by a path integral over all fields φ on the Euclidean space obtained by setting τ = it, which vanish on the walls of the box and which are periodic in τ with period β. From this partition function he computed the energy, entropy and pressure of the radiation of the field φ. For flat spaces the eigenvalues appearing in the physical quantities are generally known, whereas for curved spaces they are not, and asymptotic methods are then needed.
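As a hedged sketch of the relations involved (standard Gaussian path-integral and thermodynamic identities, with notation chosen here rather than taken from the text): for a quadratic Euclidean action ½∫φAφ the partition function is a regularized determinant of the operator A, and the thermodynamic quantities then follow from ln Z:

```latex
% Sketch under the assumptions stated above: Gaussian path integral and
% zeta-regularized determinant \det A := e^{-\zeta_A'(0)}.
Z = (\det A)^{-1/2}, \qquad \ln Z = \tfrac{1}{2}\,\zeta_A'(0),
\qquad
E = -\frac{\partial \ln Z}{\partial \beta}, \qquad
S = \beta E + \ln Z, \qquad
P = \frac{1}{\beta}\,\frac{\partial \ln Z}{\partial V}.
```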
Another method defines the possibly divergent infinite product a_1 a_2 ⋯ to be exp(−ζ′_A(0)). Ray & Singer (1971) used this to define the determinant of a positive self-adjoint operator A (the Laplacian of a Riemannian manifold in their application) with eigenvalues a_1, a_2, …, and in this case the zeta function is formally the trace of A^{−s}.
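To make this concrete, here is a hedged numerical sketch (assuming the mpmath library; the choice of operator and the interval length L = 3 are illustrative, not taken from the text): for the Dirichlet Laplacian on an interval [0, L] the eigenvalues are (nπ/L)², so ζ_A(s) = (L/π)^{2s} ζ(2s), and the zeta-regularized determinant exp(−ζ′_A(0)) comes out as 2L.

```python
# Hedged sketch: zeta-regularized determinant exp(-zeta_A'(0)) for the
# Dirichlet Laplacian on [0, L], eigenvalues (n*pi/L)^2.  Summing
# (n*pi/L)^(-2s) gives zeta_A(s) = (L/pi)^(2s) * zeta(2s), which mpmath
# continues analytically; the closed-form answer is zeta_A'(0) = -ln(2L).
from mpmath import mp, zeta, diff, exp, pi, mpf

mp.dps = 30
L = mpf(3)                      # illustrative interval length

def zeta_A(s):
    # spectral zeta function, expressed through the Riemann zeta function
    return (L / pi) ** (2 * s) * zeta(2 * s)

det_A = exp(-diff(zeta_A, 0))   # numerical derivative at s = 0
print(det_A, 2 * L)             # both print 6.0, agreeing with det = 2L
```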
In physics, especially quantum field theory, regularization is a method of modifying observables which have singularities in order to make them finite by the introduction of a suitable parameter called the regulator. The regulator, also known as a "cutoff", models our lack of knowledge about physics at unobserved scales (e.g. scales of small size or large energy levels). It compensates for (and requires) the possibility that "new physics" may be discovered at those scales which the present theory is unable to model, while enabling the current theory to give accurate predictions as an "effective theory" within its intended scale of use.
The infinite series whose terms are the natural numbers 1 + 2 + 3 + 4 + ⋯ is a divergent series. The nth partial sum of the series is the triangular number n(n + 1)/2, which increases without bound as n goes to infinity. Because the sequence of partial sums fails to converge to a finite limit, the series does not have a sum. Although the series seems at first sight not to have any meaningful value at all, it can be manipulated to yield a number of mathematically interesting results.
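As a hedged numerical sketch (not part of the text above) of how the regulator idea from the earlier paragraph applies to this series: damping the terms with a cutoff factor e^{−εn} makes the sum finite, and after removing the divergent 1/ε² piece the remaining finite part approaches −1/12, the same value the zeta method assigns.

```python
# Regulate 1 + 2 + 3 + ... with a cutoff exp(-eps*n).  The regulated sum
# equals 1/(4*sinh(eps/2)^2) = 1/eps^2 - 1/12 + O(eps^2), so subtracting
# the divergent 1/eps^2 piece leaves -1/12 in the limit eps -> 0.
import math

for eps in (0.1, 0.01):
    N = int(60 / eps)                        # truncate once terms are negligible
    s = sum(n * math.exp(-eps * n) for n in range(1, N + 1))
    print(eps, s - 1 / eps**2)               # approaches -1/12 = -0.08333...
```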
In mathematics, 1 + 1 + 1 + 1 + ⋯, also written \sum_{n=1}^{\infty} n^0, \sum_{n=1}^{\infty} 1^n, or simply \sum_{n=1}^{\infty} 1, is a divergent series, meaning that its sequence of partial sums does not converge to a limit in the real numbers. The sequence 1^n can be thought of as a geometric series with the common ratio 1. Unlike other geometric series with rational ratio (except −1), it converges neither in the real numbers nor in the p-adic numbers for some p. In the context of the extended real number line, \sum_{n=1}^{\infty} 1 = +\infty, since its sequence of partial sums increases monotonically without bound.
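Zeta function regularization nonetheless attaches a finite value to this series: writing the terms as n^{−s} with s = 0 and using the analytic continuation of the Riemann zeta function gives ζ(0) = −1/2 as the regularized value (a well-known assignment, noted here as a brief aside; it is not a sum in the ordinary sense).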