On Wyner-Ziv networks

Abstract

Wyner and Ziv [1] determined the rate-distortion function of source coding with side information. In this paper, we consider a network extension of their scenario: many sources have to be compressed in a rate-distortion sense for a decoder that has access to uncompressed side information about the sources. This may model a sensor or multi-camera network in which the data collection point itself also has a sensor or camera attached to it. Basic results for the case of two sources and side information have been presented recently [2], [3]. The present paper extends these results to the case of more than two sources. In particular, we derive an achievable rate-distortion region and an outer bound on the best rate-distortion region. The two do not coincide in general, but they do in the special case where the sources are conditionally independent given the side information. We illustrate this result with applications. For example, we discuss two different kinds of rate loss of distributed compression as compared to joint (i.e., centralized) compression. Thereafter, we show that in some cases our result makes it possible to derive bounds on the rate-distortion region of the distributed compression problem without side information, and we also show how a certain source-channel separation theorem can be established.
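
For context, the single-source result of Wyner and Ziv [1] that the paper generalizes can be stated in its standard textbook form (notation here is generic, not taken from the paper): for a source X with side information Y available only at the decoder and distortion measure d, the rate-distortion function is

```latex
R_{\mathrm{WZ}}(D) \;=\;
\min_{\substack{p(u \mid x),\; g \;:\; U - X - Y \text{ forms a Markov chain},\\[2pt]
\mathbb{E}\,d\bigl(X,\, g(U, Y)\bigr) \,\le\, D}}
\Bigl( I(X; U) - I(Y; U) \Bigr),
```

where U is an auxiliary random variable and g is the decoder's reconstruction function, which may use the side information Y.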


Related MOOCs (6)

Digital Signal Processing [retired]

The course provides a comprehensive overview of digital signal processing theory, covering discrete time, Fourier analysis, filter design, sampling, interpolation and quantization; it also includes a ...

Digital Signal Processing

Digital Signal Processing is the branch of engineering that, in the space of just a few decades, has enabled unprecedented levels of interpersonal communication and of on-demand entertainment. By rewo ...

Digital Signal Processing I

Basic signal processing concepts, Fourier analysis and filters. This module can
be used as a starting point or a basic refresher in elementary DSP

Related concepts (34)

Quantization (signal processing)

Quantization, in mathematics and digital signal processing, is the process of mapping input values from a large set (often a continuous set) to output values in a (countable) smaller set, often with a finite number of elements. Rounding and truncation are typical examples of quantization processes. Quantization is involved to some degree in nearly all digital signal processing, as the process of representing a signal in digital form ordinarily involves rounding. Quantization also forms the core of essentially all lossy compression algorithms.
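
As a concrete illustration (a minimal sketch, not tied to any particular codec), a uniform mid-tread quantizer maps real-valued inputs onto a countable grid of reconstruction levels by scaling and rounding:

```python
def quantize(x, step):
    """Uniform mid-tread quantizer: snap x to the nearest multiple of `step`."""
    return step * round(x / step)

# Continuous-valued samples are mapped to the grid {..., -0.5, 0.0, 0.5, 1.0, ...}
samples = [0.12, -0.49, 0.51, 1.26]
codes = [quantize(s, 0.5) for s in samples]
print(codes)
```

The rounding step is exactly where information is discarded, which is why quantization lies at the core of lossy compression: the original values cannot be recovered from the grid points.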

Lossless compression

Lossless compression is a class of data compression that allows the original data to be perfectly reconstructed from the compressed data with no loss of information. Lossless compression is possible because most real-world data exhibits statistical redundancy. By contrast, lossy compression permits reconstruction only of an approximation of the original data, though usually with greatly improved compression rates (and therefore reduced media sizes).
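
Both properties — perfect reconstruction and reliance on statistical redundancy — can be seen in a round trip through a standard lossless codec (Python's zlib here, chosen purely for illustration):

```python
import zlib

redundant = b"abc" * 1000       # highly repetitive: compresses well
random_ish = bytes(range(256))  # little repetition at this scale: compresses poorly

for data in (redundant, random_ish):
    packed = zlib.compress(data)
    # Lossless: decompression yields the original bytes exactly.
    assert zlib.decompress(packed) == data
    print(len(data), "->", len(packed))
```

The repetitive input shrinks dramatically, while the low-redundancy input barely does (or even grows slightly), reflecting the entropy limit on lossless compression.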

Rate–distortion theory

Rate–distortion theory is a major branch of information theory which provides the theoretical foundations for lossy data compression; it addresses the problem of determining the minimal number of bits per symbol, as measured by the rate R, that should be communicated over a channel, so that the source (input signal) can be approximately reconstructed at the receiver (output signal) without exceeding an expected distortion D. Rate–distortion theory gives an analytical expression for how much compression can be achieved using lossy compression methods.
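
In standard notation (a textbook formulation, stated here for reference), the rate-distortion function is

```latex
R(D) \;=\; \min_{p(\hat{x} \mid x)\,:\; \mathbb{E}\,d(X, \hat{X}) \le D} I(X; \hat{X}),
```

and for the classical example of a Gaussian source with variance \(\sigma^2\) under squared-error distortion it evaluates to \(R(D) = \max\bigl\{0, \tfrac{1}{2}\log_2(\sigma^2/D)\bigr\}\): halving the distortion costs half a bit per sample.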

Related publications (53)

The use of point clouds as an imaging modality has been rapidly growing, motivating research on compression methods to enable efficient transmission and storage for many applications. While compression standards relying on conventional techniques such as ...

2023

In 1948, Claude Shannon laid the foundations of information theory, which grew out of a study to find the ultimate limits of source compression and of reliable communication. Since then, information theory has proved itself not only as a quest to find the ...

Gilles Fourestey, Félix Schmidt

The rapid increase in medical and biomedical image acquisition rates has opened up new avenues for image analysis, but has also introduced formidable challenges. This is evident, for example, in selective plane illumination microscopy where acquisition rat ...