The history of calculus is fraught with philosophical debates about the meaning and logical validity of fluxions or infinitesimal numbers. The standard way to resolve these debates is to define the operations of calculus using epsilon–delta procedures rather than infinitesimals. Nonstandard analysis instead reformulates the calculus using a logically rigorous notion of infinitesimal numbers.
Nonstandard analysis was introduced in the early 1960s by the mathematician Abraham Robinson. He wrote:
the idea of infinitely small or infinitesimal quantities seems to appeal naturally to our intuition. At any rate, the use of infinitesimals was widespread during the formative stages of the Differential and Integral Calculus. As for the objection ... that the distance between two distinct real numbers cannot be infinitely small, Gottfried Wilhelm Leibniz argued that the theory of infinitesimals implies the introduction of ideal numbers which might be infinitely small or infinitely large compared with the real numbers but which were to possess the same properties as the latter.
Robinson argued that Leibniz's law of continuity is a precursor of the transfer principle. Robinson continued:
However, neither he nor his disciples and successors were able to give a rational development leading up to a system of this sort. As a result, the theory of infinitesimals gradually fell into disrepute and was replaced eventually by the classical theory of limits.
Robinson continued:
Leibniz's ideas can be fully vindicated and ... they lead to a novel and fruitful approach to classical Analysis and to many other branches of mathematics. The key to our method is provided by the detailed analysis of the relation between mathematical languages and mathematical structures which lies at the bottom of contemporary model theory.
In 1973, intuitionist Arend Heyting praised nonstandard analysis as "a standard model of important mathematical research".
A non-zero element of an ordered field F is infinitesimal if and only if its absolute value is smaller than any element of F of the form 1/n, for n a standard natural number.
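As a minimal sketch in LaTeX notation (writing x for a non-zero element of the ordered field F and letting n range over the standard natural numbers), the condition reads:

\[ |x| < \frac{1}{n} \quad \text{for every standard } n \in \mathbb{N}. \]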
In mathematics, an infinitesimal number is a quantity that is closer to zero than any standard real number, but that is not zero. The word infinitesimal comes from a 17th-century Modern Latin coinage infinitesimus, which originally referred to the "infinity-th" item in a sequence. Infinitesimals do not exist in the standard real number system, but they do exist in other number systems, such as the surreal number system and the hyperreal number system, which can be thought of as the real numbers augmented with both infinitesimal and infinite quantities; the augmentations are the reciprocals of one another.
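To make the last remark concrete, here is a small sketch (with ε standing for a hypothetical positive infinitesimal and ω for its reciprocal): taking reciprocals turns the defining inequalities of an infinitesimal into those of an infinite number, and vice versa.

\[ 0 < \varepsilon < \frac{1}{n} \ \text{for every standard } n \quad\Longleftrightarrow\quad \omega = \frac{1}{\varepsilon} > n \ \text{for every standard } n. \]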
In mathematics, a real number is a number that can be used to measure a continuous one-dimensional quantity such as a distance, duration or temperature. Here, continuous means that pairs of values can have arbitrarily small differences. Every real number can be almost uniquely represented by an infinite decimal expansion (for example, 0.999... and 1.000... are two expansions of the same real number). The real numbers are fundamental in calculus (and more generally in all mathematics), in particular by their role in the classical definitions of limits, continuity and derivatives.
In mathematics, the system of hyperreal numbers is a way of treating infinite and infinitesimal (infinitely small but non-zero) quantities. The hyperreals, or nonstandard reals, *R, are an extension of the real numbers R that contains numbers greater than anything of the form 1 + 1 + ⋯ + 1 (for any finite number of terms). Such numbers are infinite, and their reciprocals are infinitesimals. The term "hyper-real" was introduced by Edwin Hewitt in 1948. The hyperreal numbers satisfy the transfer principle, a rigorous version of Leibniz's heuristic law of continuity.
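As a brief sketch of how hyperreal numbers are used in practice (st denotes the standard-part function, which maps a finite hyperreal to the unique real number infinitely close to it, and ε is a positive infinitesimal), the derivative of f(x) = x^2 can be computed as follows; the transfer principle guarantees that the usual algebraic identities for real numbers remain valid for hyperreals.

\[ \frac{f(x+\varepsilon)-f(x)}{\varepsilon} = \frac{(x+\varepsilon)^{2}-x^{2}}{\varepsilon} = 2x+\varepsilon, \qquad f'(x) = \operatorname{st}(2x+\varepsilon) = 2x. \]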