
# Fractal compression

Summary

Fractal compression is a lossy compression method for digital images, based on fractals. The method is best suited for textures and natural images, relying on the fact that parts of an image often resemble other parts of the same image. Fractal algorithms convert these parts into mathematical data called "fractal codes", which are used to recreate the encoded image.
Iterated function system
Fractal image representation may be described mathematically as an iterated function system (IFS).
We begin with the representation of a binary image, where the image may be thought of as a subset of ℝ². An IFS is a set of contraction mappings ƒ1,...,ƒN, each mapping ℝ² to ℝ².
According to these mapping functions, the IFS describes a two-dimensional set S as the fixed point of the Hutchinson operator

H(A) = ƒ1(A) ∪ ƒ2(A) ∪ ⋯ ∪ ƒN(A).
That is, H is an operator mapping sets to sets, and S is the unique set satisfying H(S) = S. The idea is to construct the IFS such that this set S is the input binary image. The set S can be recovered from the IFS by fixed point iteration: for any nonempty compact initial set A0, the iteration Ak+1 = H(Ak) converges to S.
The set S is self-similar because H(S) = S implies that S is a union of mapped copies of itself:

S = ƒ1(S) ∪ ƒ2(S) ∪ ⋯ ∪ ƒN(S).
So we see the IFS is a fractal representation of S.
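The fixed-point iteration above can be sketched concretely. The three contraction maps below are an assumed example (not from the text); their attractor is the Sierpinski triangle:

```python
import numpy as np

# Three contraction mappings f_i(p) = 0.5*p + b_i (an illustrative choice);
# the unique fixed point S of their Hutchinson operator is the Sierpinski
# triangle.
OFFSETS = np.array([[0.0, 0.0], [0.5, 0.0], [0.25, 0.5]])

def hutchinson(points):
    """Apply H(A) = f1(A) ∪ f2(A) ∪ f3(A) to a finite point set."""
    return np.concatenate([0.5 * points + b for b in OFFSETS])

# Fixed-point iteration A_{k+1} = H(A_k), starting from an arbitrary
# nonempty set (here a single point); the iterates converge to S.
points = np.array([[0.3, 0.7]])
for _ in range(8):
    points = hutchinson(points)

print(points.shape)  # (6561, 2): 3^8 mapped copies of the start point
```

Note that the starting set is irrelevant: any nonempty compact set converges to the same attractor, which is exactly why a decoder needs only the maps, not the original image.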
IFS representation can be extended to a grayscale image by considering the image's graph as a subset of ℝ³. For a grayscale image u(x,y), consider the set
S = {(x, y, u(x,y))}. Then, similar to the binary case, S is described by an IFS using a set of contraction mappings ƒ1,...,ƒN, but in ℝ³, with each ƒi mapping ℝ³ to ℝ³.
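As a sketch of what such a mapping in ℝ³ can look like (the affine form and the coefficients below are illustrative assumptions, not from the text): the map contracts spatially in (x, y) and rescales the intensity z, which corresponds to the contrast/brightness adjustment used by practical encoders.

```python
# Illustrative affine contraction in ℝ³ for grayscale IFS:
#   f(x, y, z) = (a*x + b*y + e, c*x + d*y + f, s*z + o)
# The spatial part shrinks (x, y); |s| < 1 contracts intensity (contrast)
# and o shifts it (brightness).
def make_map(a, b, c, d, e, f, s, o):
    def f_i(p):
        x, y, z = p
        return (a * x + b * y + e, c * x + d * y + f, s * z + o)
    return f_i

# Example: halve the spatial scale, halve the intensity, then add 30.
f1 = make_map(0.5, 0.0, 0.0, 0.5, 0.0, 0.0, 0.5, 30.0)
print(f1((4.0, 8.0, 100.0)))  # (2.0, 4.0, 80.0)
```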
A challenging problem of ongoing research in fractal image representation is how to choose the ƒ1,...,ƒN such that its fixed point approximates the input image, and how to do this efficiently.
A simple approach for doing so is the following partitioned iterated function system (PIFS):
1. Partition the image domain into range blocks Ri of size s×s.
2. For each Ri, search the image to find a block Di of size 2s×2s that is very similar to Ri.
3. Select the mapping functions such that H(Di) = Ri for each i.
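A toy encoder following these three steps might look like this (the block size, search stride, and all names are illustrative; real encoders also try rotations and flips of the domain blocks and use quadtree partitions):

```python
import numpy as np

def encode_pifs(img, s=4):
    """Toy PIFS encoder sketch: for each s×s range block, find the 2s×2s
    domain block whose downsampled, contrast/brightness-adjusted version
    best matches it."""
    h, w = img.shape
    codes = []
    for ry in range(0, h - s + 1, s):
        for rx in range(0, w - s + 1, s):
            R = img[ry:ry+s, rx:rx+s].astype(float)
            best = None
            for dy in range(0, h - 2*s + 1, 2*s):
                for dx in range(0, w - 2*s + 1, 2*s):
                    D = img[dy:dy+2*s, dx:dx+2*s].astype(float)
                    # Downsample the domain block by 2x averaging so it
                    # matches the range block's size (spatial contraction).
                    Dsmall = D.reshape(s, 2, s, 2).mean(axis=(1, 3))
                    # Least-squares contrast a and brightness b for
                    # R ≈ a * Dsmall + b (the grayscale part of the map).
                    x, y = Dsmall.ravel(), R.ravel()
                    var = x.var()
                    a = ((x - x.mean()) @ (y - y.mean())) / (var * x.size) if var > 0 else 0.0
                    b = y.mean() - a * x.mean()
                    err = ((a * x + b - y) ** 2).sum()
                    if best is None or err < best[0]:
                        best = (err, dy, dx, a, b)
            # Store (range position, (domain position, contrast, brightness)).
            codes.append(((ry, rx), best[1:]))
    return codes
```

Decoding would then start from an arbitrary image and repeatedly apply the stored block mappings, shrinking each matched domain block onto its range block, until the fixed point — an approximation of the original image — emerges.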


Related courses (19)

COM-404: Information theory and coding

The mathematical principles of communication that govern the compression and transmission of data and the design of efficient methods of doing so.

COM-406: Foundations of Data Science

We discuss a set of topics that are important for the understanding of modern data science but that are typically not taught in an introductory ML course.

EE-555: Systems and architectures for signal processing

Study of the essential components and implementation technologies of digital signal processing and communication systems from the theoretical, algorithmic and system implementation point of view.

Related concepts (8)


Wavelet transform

In mathematics, a wavelet series is a representation of a square-integrable (real- or complex-valued) function by a certain orthonormal series generated by a wavelet. This article provides a formal, mathematical definition of an orthonormal wavelet and of the integral wavelet transform. A function is called an orthonormal wavelet if it can be used to define a Hilbert basis, that is a complete orthonormal system, for the Hilbert space of square integrable functions.

Digital image

A digital image is an image composed of picture elements, also known as pixels, each with a finite, discrete numeric value for its intensity or gray level, produced as the output of a two-dimensional function of its spatial coordinates, denoted x and y for the x-axis and y-axis, respectively. Depending on whether the image resolution is fixed, it may be of vector or raster type. Raster images have a finite set of digital values, called picture elements or pixels.

Related lectures (169)

Introduction to Structural Mechanics

Introduces the basics of structural mechanics, covering plane trusses, internal forces, and equilibrium analysis.

Compression: Prefix-Free Codes (COM-406: Foundations of Data Science)

Explains prefix-free codes for efficient data compression and the significance of uniquely decodable codes.

SVD: Singular Value Decomposition (MATH-212: Analyse numérique et optimisation)

Covers the concept of Singular Value Decomposition (SVD) for compressing information in matrices and images.

Related publications (1)

The main goal in network information theory is to identify the fundamental limits of communication over networks, and to design solutions which perform close to such limits. After several decades of effort, many important problems still do not have a characterization of achievable performance in terms of a finite-dimensional description. Given this discouraging state of affairs, a natural question to ask is whether there are systematic approaches to make progress on these open questions. Recently, significant progress has been made on several open problems by seeking a (provably) approximate characterization. The main goal of approximation in network information theory is to obtain a universal approximation gap between the achievable and the optimal performance. This approach consists of four ingredients: simplify the model, obtain an optimal solution for the simplified model, translate this optimal scheme and the outer bounds back to the original model, and finally bound the gap between what can be achieved using the obtained technique and the outer bound. Using such an approach, recent progress has been made on several problems such as the Gaussian interference channel and Gaussian relay networks. In this thesis, we demonstrate that this approach is not only successful in problems of transmission over noisy networks, but also gives the first approximation for a network data compression problem. We use this methodology to (approximately) resolve problems that have been open for several decades. Not only do we give theoretical characterizations, but we also develop the new coding schemes required to satisfy this approximate optimality property. These ideas could give insights into the efficient design of future network communication systems.

This thesis is split into two main parts. The first part deals with approximation in lossy network data compression.
Here, a lossy data compression problem is approximated by a lossless counterpart, in which all the bits in the binary expansion of the source above the required distortion have to be losslessly delivered to the destination. In particular, we study the multiple description (MD) problem via the multi-level diversity (MLD) coding problem. The symmetric version of the MLD problem is well studied, and we can directly use it to approximate the symmetric MD problem. We formulate the asymmetric multi-level diversity problem and solve it for the three-description case. The optimal solution for this problem, which is later used to approximate the asymmetric multiple description problem, is based on jointly compressing independent sources. In both the symmetric and asymmetric cases, we derive inner and outer bounds for the achievable rate region, which, together with the gap analysis, provide an approximate solution for the problem. In particular, we resolve the symmetric Gaussian MD problem, which had been open for three decades, to within 1 bit.

In the second part, we initiate a study of a Gaussian relay-interference network, in which relay (helper) nodes are to facilitate competing information flows over a wireless network. We focus on a two-stage relay-interference network with weak cross-links, causing the network to behave like a chain of Z Gaussian channels. For these Gaussian ZZ and ZS networks, we establish an approximate characterization of the rate region. The outer bounds to the capacity region are established using genie-aided techniques that yield bounds sharper than the traditional cut-set outer bounds. For the inner bound of the ZZ network, we propose a new interference management scheme, termed interference neutralization, which is implemented using structured lattice codes. This technique allows for over-the-air interference removal, without the transmitters having complete access to the interfering signals.
We use insights gained from an exact characterization of the corresponding linear deterministic version of the problem to study the Gaussian network, and resolve the Gaussian relay-interference network to within 2 bits. The new interference management technique (interference neutralization) demonstrates the use of structured lattice codes in this problem.

We also consider communication from a source to a destination over a wireless network with the help of a set of authenticated relays, in the presence of an adversarial jammer who wishes to disturb communication. We focus on a special diamond network and show that the use of interference suppression (nulling) is crucial to approach the capacity of the network. We provide an exact capacity characterization for the deterministic network, along with an approximate characterization (to within 4 bits) for the Gaussian network.

The common theme that binds the diverse network communication problems in this thesis is that of approximate characterization when exact resolutions are difficult. The approach of focusing on the deterministic/lossless problems underlying the noisy/lossy network communication problems has allowed us to develop new techniques to study these questions. These new techniques might be of independent interest in other network information theory problems.