
# Lippmann Photography: A Signal Processing Perspective

Gilles Baechler, Arnaud Latty, Michalina Wanda Pacholska, Adam James Scholefield, Martin Vetterli

*IEEE, 2022*

Journal paper

Abstract

Lippmann (or interferential) photography is the first and only analog photography method that can capture the full color spectrum of a scene in a single take. This technique, invented more than a hundred years ago, records the colors by creating interference patterns inside the photosensitive plate. Lippmann photography provides a great opportunity to demonstrate several fundamental concepts in signal processing. Conversely, a signal processing perspective enables us to shed new light on the technique. In our previous work (G. Baechler et al., 2021), we analyzed the spectra of historical Lippmann plates using our own mathematical model. In this paper, we provide the derivation of this model and validate it experimentally. We highlight new behaviors that physicists had not explained to date. In particular, we show that the spectra generated by Lippmann plates are in fact distorted versions of the original spectra. We also show that these distortions are influenced by the thickness of the plate and the reflection coefficient of the reflective medium used in the capture of the photographs. We verify our model with extensive experiments on our own Lippmann photographs.
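The thickness effect described in the abstract can be illustrated with a toy one-dimensional sketch (our own simplification, not the paper's actual model): a narrow spectral line produces a standing-wave intensity pattern inside the plate, and reading it back from a finite depth amounts to a windowed cosine transform, so the thinner the plate, the more the recovered line is broadened.

```python
import numpy as np

# Toy 1-D sketch (illustrative only, not the paper's exact derivation):
# the AC part of the standing wave recorded in the plate is
# I(z) ~ sum_k S(k) cos(2 k z).  Reading it back from a plate of finite
# depth Z is a *windowed* cosine transform, so the recovered spectrum is
# a smeared version of the original -- and the smearing shrinks as the
# plate gets thicker.

k = np.linspace(8.0, 16.0, 1201)            # wavenumbers (arbitrary units)
dk = k[1] - k[0]
S = np.exp(-0.5 * ((k - 12.0) / 0.1) ** 2)  # narrow "original" spectral line

def recovered_spectrum(depth, n_z=2000):
    z = np.linspace(0.0, depth, n_z)
    dz = z[1] - z[0]
    C = np.cos(2.0 * np.outer(k, z))        # cos(2 k z) kernel
    pattern = (S @ C) * dk                  # recorded interference pattern I(z)
    R = (C @ pattern) * dz                  # windowed cosine transform back to k
    return np.abs(R) / np.abs(R).max()

def fwhm(spec):
    above = np.where(spec >= 0.5)[0]        # full width at half maximum
    return (above[-1] - above[0]) * dk

thin, thick = recovered_spectrum(2.0), recovered_spectrum(20.0)
print(f"recovered line width, thin plate : {fwhm(thin):.3f}")
print(f"recovered line width, thick plate: {fwhm(thick):.3f}")
```

The thin plate returns a clearly broader line than the thick one, which is the distortion mechanism the paper models (here without the reflection-coefficient effect).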



Related concepts (11)

Digital signal processing

Digital signal processing (DSP) is the use of digital processing, such as by computers or more specialized digital signal processors, to perform a wide variety of signal processing operations.

Signal processing

Signal processing is an electrical engineering subfield that focuses on analyzing, modifying and synthesizing signals, such as sound, images, potential fields, seismic signals, altimetry processing, and scientific measurements.

Mathematical model

A mathematical model is an abstract description of a concrete system using mathematical concepts and language. The process of developing a mathematical model is termed mathematical modeling.

Related publications (6)

Applications on personal digital assistants and mobile phones are no longer restricted to multimedia or wireless communications, but have been extended to handle Global Positioning System (GPS) functionalities. Consequently, the growing market of GPS-capable mobile devices is driving interest in software receiver solutions, as they provide several advantages over traditional hardware implementations. First, they share system resources such as the processor, embedded memory, and power with other system units, reducing both the size and the cost of their integration. Second, they can easily be reprogrammed, via a firmware update, to incorporate the latest developments, such as the exploitation of future satellite signals or improved multipath mitigation techniques. Finally, they offer a more flexible solution for rapid research and development than conventional hardware receivers, where the chip design is fixed and obtained after a long integration process.

With the increasing performance of modern processors, it is now feasible to implement a multi-channel GPS receiver operating in real time in software. A major problem with the software architecture, however, is the large computing resources required for the digital signal processing. Former studies have demonstrated that a straightforward transposition of traditional hardware-based architectures into software would lead to an amount of integer operations that is not suitable even for today's fastest computers. From this observation, several strategies have been proposed in the literature to reduce the complexity of the receiver operations. The first relies on advanced microprocessor instruction sets that can process vectors of data by operating on multiple integer values at the same time. This results in significant gains in execution speed, but also severely limits the portability of the code, since the operations are tied to specific processor architectures. Another alternative consists in exploiting the native bitwise representation of the signal: the data bits are stored in separate vectors on which parallel logical operations can be performed. The objective is to take advantage of the universality, high parallelism, and speed of bitwise operations, for which a single integer operation translates into a few simple parallel logical relations. The inherent drawback of bitwise processing, however, is its lack of flexibility, as the complexity becomes bit-depth dependent.

This thesis was carried out in the framework of a two-year industrial project (2007-2009) in collaboration with U-blox AG in Thalwil. It aimed at the realization of a multi-channel, platform-independent, real-time GPS L1 software receiver. The main challenge of this project was to provide real-time performance while keeping the code portable, making the receiver suitable for any type of software implementation. To that end, new techniques and algorithms have been developed to optimize the processing chain and lower the processor load. The main idea is to regroup data that share the same characteristics and process them in batches instead of sequentially. This way, it becomes possible to progressively reduce the data throughput and, consequently, the number of operations to perform. A completely new receiver architecture has been proposed and validated through the realization of a functional prototype, demonstrating the feasibility of the concept.
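The bitwise strategy described above can be sketched for the simplest case of 1-bit quantized samples (a hypothetical toy, not the receiver's actual implementation): packing sign bits into bytes lets a single XOR compare eight samples at once, and a population count totals the disagreements.

```python
# Toy illustration of bitwise 1-bit correlation (hypothetical sketch, not
# the thesis's receiver code): the sign bits of the incoming samples and of
# the local replica code are packed into bytes; one XOR then compares eight
# samples in a single integer operation, and a popcount totals the
# disagreements.  The correlation is (#agreements - #disagreements).
import numpy as np

rng = np.random.default_rng(0)
N = 1024
code = rng.integers(0, 2, N)             # replica PRN chips as 0/1 sign bits
signal = code ^ (rng.random(N) < 0.05)   # received signs, ~5% of bits flipped

packed_code = np.packbits(code)          # 8 one-bit samples per byte
packed_sig = np.packbits(signal)

xored = packed_code ^ packed_sig         # one XOR marks 8 disagreements at once
# popcount (a real implementation would use a hardware POPCNT instruction)
disagree = bin(int.from_bytes(xored.tobytes(), "big")).count("1")
corr_bitwise = N - 2 * disagree

# reference: conventional sample-by-sample correlation with +/-1 values
corr_direct = int(((2 * code - 1) * (2 * signal - 1)).sum())
print(corr_bitwise, corr_direct)
```

Both computations agree, but the bitwise version touches 64 bytes instead of 1024 integers, which is the throughput reduction the thesis exploits.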

Francisco Pereira Correia Pinto

Sound waves propagate through space and time by transference of energy between the particles in the medium, which vibrate according to the oscillation patterns of the waves. These vibrations can be captured by a microphone and translated into a digital signal, representing the amplitude of the sound pressure as a function of time. The signal obtained by the microphone characterizes the time-domain behavior of the acoustic wave field, but has no information related to the spatial domain. The spatial information can be obtained by measuring the vibrations with an array of microphones distributed at multiple locations in space. This allows the amplitude of the sound pressure to be represented not only as a function of time but also as a function of space. The use of microphone arrays creates a new class of signals that is somewhat unfamiliar to Fourier analysis. Current paradigms try to circumvent the problem by treating the microphone signals as multiple "cooperating" signals, and applying the Fourier analysis to each signal individually. Conceptually, however, this is not faithful to the mathematics of the wave equation, which expresses the acoustic wave field as a single function of space and time, and not as multiple functions of time. The goal of this thesis is to provide a formulation of Fourier theory that treats the wave field as a single function of space and time, and allows it to be processed as a multidimensional signal using the theory of digital signal processing (DSP). We base this on a physical principle known as the Huygens principle, which essentially says that the wave field can be sampled at the surface of a given region in space and subsequently reconstructed in the same region, using only the samples obtained at the surface. To translate this into DSP language, we show that the Huygens principle can be expressed as a linear system that is both space- and time-invariant, and can be formulated as a convolution operation. 
If the input signal is transformed into the spatio-temporal Fourier domain, the system can also be analyzed according to its frequency response. In the first half of the thesis, we derive theoretical results that express the 4-D Fourier transform of the wave field as a function of the parameters of the scene, such as the number of sources and their locations, the source signals, and the geometry of the microphone array. We also show that the wave field can be effectively analyzed on a small scale using what we call the space/time-frequency representation space, consisting of a Gabor representation across the spatio-temporal manifold defined by the microphone array. These results are obtained by treating the signals as continuous functions of space and time. The second half of the thesis is dedicated to processing the wave field in discrete space and time, using Nyquist sampling theory and multidimensional filter banks theory. In particular, we show examples of orthogonal filter banks that effectively represent the wave field in terms of its elementary components while satisfying the requirements of critical sampling and perfect reconstruction of the input. We discuss the architecture of such filter banks, and demonstrate their applicability in the context of real applications, such as spatial filtering and wave field coding.
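The central idea, that the wave field is one multidimensional signal, can be sketched with a minimal example (assumed parameters, not the thesis's code): a far-field plane wave recorded on a linear microphone array concentrates its 2-D space/time Fourier transform along a line whose slope encodes the arrival angle.

```python
# Minimal sketch (assumed setup, not the thesis's code): a single far-field
# plane wave s(t - x*sin(theta)/c) sampled on a linear microphone array.
# Its 2-D space/time Fourier transform concentrates on the line
# k_x = (omega/c)*sin(theta), so the arrival angle can be read directly in
# the spatio-temporal frequency domain.
import numpy as np

c = 343.0                 # speed of sound [m/s]
fs = 8000.0               # temporal sampling rate [Hz]
dx = 0.02                 # microphone spacing [m]
n_t, n_x = 256, 64
theta = np.deg2rad(30.0)  # arrival angle
f0 = 1000.0               # source signal: a pure tone [Hz]

t = np.arange(n_t) / fs
x = np.arange(n_x) * dx
delay = x * np.sin(theta) / c   # per-microphone propagation delay
p = np.cos(2 * np.pi * f0 * (t[None, :] - delay[:, None]))  # field p(x, t)

# 2-D FFT over space and time; locate the dominant (k_x, f) bin
P = np.fft.fftshift(np.fft.fft2(p))
kx = np.fft.fftshift(np.fft.fftfreq(n_x, d=dx))      # cycles per metre
f = np.fft.fftshift(np.fft.fftfreq(n_t, d=1 / fs))   # Hz
ix, it_ = np.unravel_index(np.abs(P).argmax(), P.shape)

# minus sign: numpy's e^{-i2*pi*k*x} convention for a wave delayed along +x
est_theta = np.arcsin(-kx[ix] * c / f[it_])
print(f"true angle {np.rad2deg(theta):.1f} deg, "
      f"estimated {np.rad2deg(est_theta):.1f} deg")
```

The estimate is quantized to the FFT grid; a longer array (larger n_x * dx) sharpens the spatial-frequency resolution, mirroring the aperture trade-offs discussed in the thesis.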

New advances in the field of image sensors (especially in CMOS technology) tend to call into question the conventional methods used to acquire images. Compressive Sensing (CS) plays a major role in this, especially in unclogging the analog-to-digital converters that generally represent the bottleneck of this type of sensor. In addition, CS eliminates the traditional compression stages performed by embedded digital signal processors dedicated to this purpose. The interest is twofold: it allows both a substantial reduction of the amount of data to be converted and the suppression of digital processing performed outside the sensor chip. For the moment, regarding the use of CS in image sensors, the main route of exploration, as well as the intended applications, aims at reducing the power consumption related to these components (ADC and DSP represent 99% of the total power consumption). More broadly, the paradigm of CS makes it possible to question, or at least to extend, Nyquist-Shannon sampling theory.

This thesis presents developments in the field of image sensors demonstrating that it is possible to consider alternative applications linked to CS. Indeed, advances are presented in the fields of hyperspectral imaging, super-resolution, high dynamic range, high speed, and non-uniform sampling. In particular, three research axes have been deepened, aiming to design appropriate architectures and acquisition processes, with their associated reconstruction techniques, that take advantage of sparse image representations. How can the on-chip implementation of compressed sensing relax sensor constraints and improve the acquisition characteristics (speed, dynamic range, power consumption)? How can CS be combined with simple analysis to provide useful image features for high-level applications (adding semantic information) and to improve the reconstructed image quality at a given compression ratio? Finally, how can CS push back the physical limitations (i.e., spectral sensitivity and pixel pitch) of imaging systems without a major impact on either the sensing strategy or the optical elements involved?

A CMOS image sensor was developed and manufactured during this Ph.D. to validate concepts such as high-dynamic-range CS. A new design approach was employed, resulting in innovative solutions for pixel addressing and conversion to perform specific acquisitions in a compressed mode. In addition, the principle of adaptive CS combined with non-uniform sampling has been developed, and possible implementations of this type of acquisition are proposed. Finally, preliminary work is presented on the use of liquid crystal devices to enable hyperspectral imaging combined with spatial super-resolution.

The conclusion of this study can be summarized as follows: CS must now be considered as a toolbox for more easily defining trade-offs among the different characteristics of a sensor: integration time, converter speed, dynamic range, resolution, and digital processing resources. However, while CS relaxes some hardware constraints at the sensor level, the collected data may be difficult to interpret and process on the decoder side, requiring massive computational resources compared to so-called conventional techniques. The application field is wide, implying that, for a targeted application, an accurate characterization of the constraints on both the sensor (encoder) and the decoder needs to be defined.
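The acquisition model underlying CS can be sketched generically (a toy example, unrelated to the actual sensor chip): a k-sparse signal is observed through far fewer random projections than it has samples, then recovered with a greedy sparse solver such as orthogonal matching pursuit.

```python
# Generic compressed-sensing toy (illustrative; not the sensor described
# above): acquire m << n random projections y = Phi @ x of an n-sample,
# k-sparse signal, then recover x with orthogonal matching pursuit (OMP).
import numpy as np

rng = np.random.default_rng(1)
n, m, k_sparse = 256, 80, 5

x = np.zeros(n)
support = rng.choice(n, k_sparse, replace=False)
x[support] = rng.normal(0.0, 1.0, k_sparse)       # sparse ground truth

Phi = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))   # random sensing matrix
y = Phi @ x                                       # compressed measurements

def omp(Phi, y, k):
    """Greedy recovery: pick the column most correlated with the residual,
    re-fit the selected columns by least squares, repeat k times."""
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        residual = y - Phi[:, idx] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(Phi, y, k_sparse)
print("max reconstruction error:", np.max(np.abs(x_hat - x)))
```

With m well above the sparsity level, recovery is exact up to numerical precision; the trade-off the thesis explores is pushing the measurement burden (and hence ADC activity) down toward this limit.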