In mathematics, an iterated function is a function X → X (that is, a function from some set X to itself) which is obtained by composing another function f : X → X with itself a certain number of times. The process of repeatedly applying the same function is called iteration. In this process, starting from some initial object, the result of applying a given function is fed back into the function as input, and the process is repeated. Iterated functions are objects of study in computer science, fractals, dynamical systems, mathematics and renormalization group physics.

The formal definition of an iterated function on a set X follows. Let X be a set and f : X → X be a function. Define f^n as the n-th iterate of f (a notation introduced by Hans Heinrich Bürmann and John Frederick William Herschel), where n is a non-negative integer, by

f^0 = id_X and f^(n+1) = f ∘ f^n,

where id_X is the identity function on X and f ∘ g denotes function composition; that is, (f ∘ g)(x) = f(g(x)), which is always associative.

Because the notation f^n may refer both to iteration (composition) of the function f and to exponentiation of the function f (the latter is commonly used in trigonometry), some mathematicians choose to use ∘ to denote the compositional meaning, writing f^∘n(x) for the n-th iterate of the function f(x), as in, for example, f^∘3(x) meaning f(f(f(x))). For the same purpose, f^[n](x) was used by Benjamin Peirce, whereas Alfred Pringsheim and Jules Molk suggested ^n f(x) instead.

In general, the following identity holds for all non-negative integers m and n:

f^m ∘ f^n = f^n ∘ f^m = f^(m+n).

This is structurally identical to the property of exponentiation that a^m a^n = a^(m+n), i.e. the special case f(x) = ax. In general, for arbitrary (negative, non-integer, etc.) indices m and n, this relation is called the translation functional equation; cf. Schröder's equation and Abel equation. On a logarithmic scale, this reduces to the nesting property of Chebyshev polynomials, T_m(T_n(x)) = T_(mn)(x), since T_n(x) = cos(n arccos(x)).
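To make the definition above concrete, here is a minimal Python sketch of n-fold composition; the helper names iterate and double are illustrative, not part of the source text, and the asserts simply spot-check the identity f^m ∘ f^n = f^(m+n) on one example.

```python
def iterate(f, n):
    """Return the n-th iterate f^n of f, i.e. f composed with itself n times.
    By convention the 0-th iterate is the identity function."""
    def f_n(x):
        for _ in range(n):
            x = f(x)
        return x
    return f_n

double = lambda x: 2 * x                    # f(x) = 2x, so f^n(x) = (2^n) x

assert iterate(double, 3)(5) == 40          # f^3(5) = 8 * 5
assert iterate(double, 0)(5) == 5           # f^0 = identity

# Spot-check f^m ∘ f^n = f^(m+n) for non-negative m and n:
m, n, x = 2, 3, 7
assert iterate(double, m)(iterate(double, n)(x)) == iterate(double, m + n)(x)
```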

Related courses (32)
EE-556: Mathematics of data: from theory to computation
This course provides an overview of key advances in continuous optimization and statistical analysis for machine learning. We review recent learning formulations and models as well as their guarantees
CS-108: Practice of object-oriented programming
Students deepen their knowledge of Java and put it into practice by carrying out a substantial project. They learn to use and implement the main types of
EE-566: Adaptation and learning
In this course, students learn to design and master algorithms and core concepts related to inference and learning from data and the foundations of adaptation and learning theories with applications.
Related lectures (175)
Trust region methods: framework & algorithms
Covers trust region methods, focusing on the framework and algorithms.
Analysis of Trust Regions with Cauchy Steps
Explores the analysis of trust regions with Cauchy steps in optimization.
Convergence Criteria
Discusses convergence criteria and when iteration stops, focusing on known cases and attention to metrics.
Related publications (150)

Geometry-Driven Stock-Constrained Truss Design via Equilibrium-Based Structural Models

Corentin Jean Dominique Fivet, Tao Sun

This paper presents a geometry-driven approach to form-finding with reused stock elements. Our proposed workflow uses a K-mean algorithm to cluster stock elements and incorporate their geometrical values early in the form-finding process. A feedback loop i ...
Springer, 2024

Estimation of Self-Exciting Point Processes from Time-Censored Data

Thomas Alois Weber, Philipp Schneider

Self-exciting point processes, widely used to model arrival phenomena in nature and society, are often difficult to identify. The estimation becomes even more challenging when arrivals are recorded only as bin counts on a finite partition of the observatio ...
2023

Incremental Learning in Diagonal Linear Networks

Raphaël Jean Berthier

Diagonal linear networks (DLNs) are a toy simplification of artificial neural networks; they consist in a quadratic reparametrization of linear regression inducing a sparse implicit regularization. In this paper, we describe the trajectory of the gradient ...
Brookline, 2023
Related concepts (37)
Fixed-point theorem
In mathematics, a fixed-point theorem is a result saying that a function F will have at least one fixed point (a point x for which F(x) = x), under some conditions on F that can be stated in general terms. The Banach fixed-point theorem (1922) gives a general criterion guaranteeing that, if it is satisfied, the procedure of iterating a function yields a fixed point.
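As an illustration of this statement (a sketch, not an example from the source), the following Python snippet iterates a map that is a contraction in a neighbourhood of √2, so the Banach fixed-point theorem guarantees convergence to its unique fixed point there; the helper fixed_point and the tolerance are assumptions made for the sketch.

```python
import math

def fixed_point(f, x0, tol=1e-12, max_iter=1000):
    """Iterate f starting from x0 until successive iterates differ by less than tol."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("fixed-point iteration did not converge")

# g(x) = (x + 2/x)/2 is a contraction near sqrt(2), so iterating it from a
# nearby starting point converges to its unique fixed point there, sqrt(2).
root2 = fixed_point(lambda x: (x + 2.0 / x) / 2.0, x0=1.0)
print(root2, math.sqrt(2))   # both approximately 1.4142135623730951
```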
Transfer operator
In mathematics, the transfer operator encodes information about an iterated map and is frequently used to study the behavior of dynamical systems, statistical mechanics, quantum chaos and fractals. In all usual cases, the largest eigenvalue is 1, and the corresponding eigenvector is the invariant measure of the system. The transfer operator is sometimes called the Ruelle operator, after David Ruelle, or the Perron–Frobenius operator or Ruelle–Perron–Frobenius operator, in reference to the applicability of the Perron–Frobenius theorem to the determination of the eigenvalues of the operator.
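A hedged illustration of this idea, assuming the standard form of the transfer operator for the doubling map T(x) = 2x mod 1 (the specific map is not named in the paragraph above): the operator averages a density over the two preimages of each point, and the constant density is fixed by it, matching the claim that the leading eigenvalue is 1 with the invariant (here Lebesgue) measure as eigenvector.

```python
def transfer_doubling(f):
    """Transfer (Perron-Frobenius) operator for the doubling map T(x) = 2x mod 1:
    (Lf)(x) = (f(x/2) + f((x+1)/2)) / 2, an average over the two preimages of x."""
    return lambda x: 0.5 * (f(x / 2.0) + f((x + 1.0) / 2.0))

# The constant density f = 1 satisfies Lf = f: eigenvalue 1, whose eigenvector
# corresponds to the invariant (Lebesgue) measure of the doubling map.
one = lambda x: 1.0
L_one = transfer_doubling(one)
assert all(abs(L_one(x) - 1.0) < 1e-15 for x in (0.1, 0.3, 0.7, 0.9))
```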
Dyadic transformation
The dyadic transformation (also known as the dyadic map, bit shift map, 2x mod 1 map, Bernoulli map, doubling map or sawtooth map) is the mapping (i.e., recurrence relation) on [0, 1) produced by the rule x_0 = x and x_(n+1) = 2 x_n mod 1 for all n ≥ 0. Equivalently, the dyadic transformation can also be defined as the iterated function map of the piecewise linear function T(x) = 2x for 0 ≤ x < 1/2 and T(x) = 2x − 1 for 1/2 ≤ x < 1. The name bit shift map arises because, if the value of an iterate is written in binary notation, the next iterate is obtained by shifting the binary point one bit to the right, and if the bit to the left of the new binary point is a "one", replacing it with a zero. A brief sketch of this reading follows below.
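A minimal Python sketch of the doubling map and its bit-shift interpretation (the function name and the sample starting point are illustrative only; dyadic rationals are used so the binary expansions are exact in floating point):

```python
def dyadic(x):
    """One step of the dyadic (doubling / bit shift) map on [0, 1): x -> 2x mod 1."""
    return (2.0 * x) % 1.0

# Bit-shift view: writing the iterate in binary, each step shifts the binary
# point one place to the right and discards the bit that lands to its left.
x = 0.8125                     # binary 0.1101
for _ in range(4):
    print(x)
    x = dyadic(x)
# prints 0.8125 (0.1101), 0.625 (0.101), 0.25 (0.01), 0.5 (0.1)
```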
Related MOOCs (7)
Optimization: principles and algorithms - Linear optimization
Introduction to linear optimization, duality and the simplex algorithm.
Optimization: principles and algorithms - Network and discrete optimization
Introduction to network optimization and discrete optimization
