
Publication

# Computation over Gaussian networks with orthogonal components

Abstract

Function computation of arbitrarily correlated discrete sources over Gaussian networks with multiple-access components but no broadcast is studied. Two classes of functions are considered: the arithmetic sum function and the frequency histogram function. The arithmetic sum function in this paper is defined as a set of multiple weighted arithmetic sums, which includes averaging of the sources and estimating each of the sources as special cases. The frequency histogram function counts the number of occurrences of each argument, which yields many important statistics such as mean, variance, maximum, minimum, median, and so on. For a class of networks, an approximate computation capacity is characterized. The proposed approach first abstracts Gaussian networks into the corresponding modulo-sum multiple-access channels via lattice codes and linear network coding, and then computes the desired function using linear Slepian–Wolf source coding.
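Since the frequency histogram discards the order of the samples but keeps all counts, any symmetric statistic can be recovered from it alone. As a minimal illustration (not taken from the paper), the sketch below builds the histogram of a discrete sample and derives several of the listed statistics from the histogram only:

```python
from collections import Counter

def type_function(samples):
    """Frequency histogram: number of occurrences of each value."""
    return Counter(samples)

def stats_from_type(hist):
    """Recover common statistics from the histogram alone,
    without access to the original sample order."""
    n = sum(hist.values())
    total = sum(v * c for v, c in hist.items())
    mean = total / n
    var = sum(c * (v - mean) ** 2 for v, c in hist.items()) / n
    # Lower median: walk the sorted support until half the mass is covered.
    cum, median = 0, None
    for v in sorted(hist):
        cum += hist[v]
        if cum * 2 >= n:
            median = v
            break
    return {"mean": mean, "var": var, "min": min(hist),
            "max": max(hist), "median": median}

hist = type_function([3, 1, 3, 2, 3, 1])
print(stats_from_type(hist))
```

The median computed here is the lower median; for even-sized samples a convention has to be chosen, since the histogram supports any of the usual ones.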

Official source

This page is generated automatically and may contain information that is not correct, complete, up to date, or relevant to your search. The same applies to every other page on this site. Be sure to verify the information against official EPFL sources.


Related concepts (21)

Computation

A computation is any type of arithmetic or non-arithmetic calculation that is well-defined. Common examples of computations are mathematical equations and computer algorithms. Mechanical or electronic devices that perform computations are known as computers.

Linear network coding

In computer networking, linear network coding is a technique in which intermediate nodes transmit data from source nodes to sink nodes by means of linear combinations. Linear network coding may be used to improve a network's throughput, efficiency, and scalability, as well as its resilience to attacks and eavesdropping.
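The classic illustration of linear network coding is the butterfly network, where a bottleneck node forwards a linear combination over GF(2), i.e. the XOR, of its two incoming packets, and each sink cancels the packet it already knows. A toy sketch (standard textbook example, not specific to this page's publication):

```python
# Butterfly-network sketch: two sources a and b share one bottleneck link.
# Over GF(2), the XOR of byte strings is a linear combination.
def xor_bytes(x, y):
    return bytes(p ^ q for p, q in zip(x, y))

a = b"\x13\x37"
b = b"\xbe\xef"

coded = xor_bytes(a, b)          # single packet sent over the bottleneck
# Sink 1 already received a on a direct link and recovers b:
assert xor_bytes(coded, a) == b
# Sink 2 already received b and recovers a:
assert xor_bytes(coded, b) == a
```

One coded transmission thus serves both sinks, which is impossible with pure routing over the same bottleneck.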

Computer network

A computer network (or data communications network, DCN) is a set of pieces of equipment linked together to exchange information.

Related publications (14)

Network information theory studies the communication of information over a network and its fundamental limits. Motivated by the pervasiveness of networks in daily life, this thesis studies the fundamental limits of particular networks, including channel coding problems such as the Gaussian multiple-access channel with feedback and source coding problems such as the lossy Gaussian Gray-Wyner network.
In the first part, we establish the sum-capacity of the Gaussian multiple-access channel with feedback. The converse bounds derived from the dependence-balance argument of Hekstra and Willems meet the achievable scheme introduced by Kramer. Even though the problem is not convex, the factorization of the lower convex envelope, a method introduced by Geng and Nair, combined with a Gaussian property, is invoked to compute the sum-capacity. Additionally, we characterize the rate region of the lossy Gaussian Gray-Wyner network under symmetric distortion. This problem is also not convex, so the factorization of the lower convex envelope is again used to show the Gaussian optimality of the auxiliaries. Both networks are long-standing open problems.
In the second part, we consider the common information introduced by Wyner and a natural relaxation of it. Wyner's common information is a measure that quantifies the commonality between two random variables. The operational significance of the relaxed quantity lies in the Gray-Wyner network: computing the relaxed Wyner's common information is directly connected to computing the rate region of the Gray-Wyner network. We derive a lower bound on Wyner's common information for any given source. The bound meets the exact Wyner's common information for sources that can be expressed as the sum of a common random variable and Gaussian noises. Moreover, we derive an upper bound on an extended variant of the information bottleneck.
Finally, we use Wyner's common information and its relaxation as tools to extract the common information between datasets. We introduce a novel procedure for constructing features from data, referred to as Common Information Components Analysis (CICA). We establish that for Gaussian statistics, CICA reduces precisely to Canonical Correlation Analysis (CCA), where the relaxation parameter determines the number of CCA components extracted. In this sense, we establish a novel rigorous connection between information measures and CCA, with CICA a strict generalization of the latter. Moreover, we show that CICA has several desirable properties, including a natural extension to more than two datasets.
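For concreteness, the bivariate Gaussian case that recurs throughout this abstract has a known closed form (due to Xu, Liu, and Chen, cited here as background rather than as a result of the thesis): for correlation coefficient rho, Wyner's common information is (1/2) log((1+rho)/(1-rho)), which always dominates the mutual information -(1/2) log(1 - rho^2). A small numeric check:

```python
import math

def wyner_common_information_gaussian(rho):
    """Wyner's common information of a bivariate Gaussian pair with
    correlation coefficient rho, in nats (known closed form):
    C_W = 1/2 * log((1 + rho) / (1 - rho))."""
    return 0.5 * math.log((1 + rho) / (1 - rho))

def mutual_information_gaussian(rho):
    """I(X;Y) = -1/2 * log(1 - rho^2), a lower bound on C_W."""
    return -0.5 * math.log(1 - rho ** 2)

for rho in (0.1, 0.5, 0.9):
    cw = wyner_common_information_gaussian(rho)
    mi = mutual_information_gaussian(rho)
    # C_W - I(X;Y) = log(1 + rho) >= 0, so common information dominates.
    assert cw >= mi
    print(f"rho={rho}: C_W={cw:.3f} nats, I(X;Y)={mi:.3f} nats")
```

The gap C_W - I(X;Y) = log(1 + rho) grows with the correlation, illustrating that common information is a strictly stronger notion than mutual information for dependent Gaussians.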

This thesis looks at efficient information processing for two network applications: content delivery with caching and collecting summary statistics in wireless sensor networks. Both applications are studied under the same paradigm: function computation over networks, where distributed source nodes cooperatively communicate some functions of individual observations to one or multiple destinations. One approach that always works is to convey all observations and then let the destinations compute the desired functions by themselves. However, if the available communication resources are limited, then revealing less unwanted information becomes critical. Centered on this goal, this thesis develops new coding schemes using information-theoretic tools.
The first part of this thesis focuses on content delivery with caching. Caching is a technique that facilitates reallocation of communication resources in order to avoid network congestion during peak-traffic times. An information-theoretic model, termed sequential coding for computing, is proposed to analyze the potential gains offered by the caching technique. For the single-user case, the proposed framework succeeds in verifying the optimality of some simple caching strategies and in providing guidance towards optimal caching strategies. For the two-user case, five representative subproblems are considered, which draw connections with classic source coding problems including the Gray-Wyner system, successive refinement, and the Kaspi/Heegard-Berger problem. Afterwards, the problem of distributed computing with successive refinement is considered. It is shown that if full data recovery is required in the second stage of successive refinement, then any information acquired in the first stage will be useful later in the second stage.
The second part of this thesis looks at the collection of summary statistics in wireless sensor networks. Summary statistics include the arithmetic mean, median, standard deviation, etc., and they belong to the class of symmetric functions. This thesis develops arithmetic computation coding in order to efficiently perform in-network computation of weighted arithmetic sums and symmetric functions. The developed arithmetic computation coding increases the achievable computation rate from $\Theta((\log L)/L)$ to $\Theta(1/\log L)$, where $L$ is the number of sensors. Finally, this thesis demonstrates that interaction among sensors is beneficial for the computation of type-threshold functions, e.g., the maximum and the indicator function, and that a non-vanishing computation rate is achievable.
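To make the scaling gain concrete (constants set to 1 purely for illustration; these are asymptotic orders from the abstract, not exact rates), one can compare the two laws numerically:

```python
import math

def rate_old(L):
    """Baseline scaling Theta((log L) / L), illustrative constant 1."""
    return math.log(L) / L

def rate_new(L):
    """Arithmetic computation coding scaling Theta(1 / log L)."""
    return 1 / math.log(L)

for L in (10, 100, 1000, 10000):
    print(f"L={L:>6}: (log L)/L = {rate_old(L):.5f}, 1/log L = {rate_new(L):.5f}")
# The gap widens with L: under the new scheme the per-sensor rate decays
# only logarithmically in L instead of almost linearly.
```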

Michael Christoph Gastpar, Chien-Yi Wang

Function computation over Gaussian networks with orthogonal components is studied for arbitrarily correlated discrete memoryless sources. Two classes of functions are considered: 1) the arithmetic sum function and 2) the type function. The arithmetic sum function in this paper is defined as a set of multiple weighted arithmetic sums, which includes averaging of the sources and estimating each of the sources as special cases. The type or frequency histogram function counts the number of occurrences of each argument, which yields various fundamental statistics, such as mean, variance, maximum, minimum, median, and so on. The proposed computation coding first abstracts Gaussian networks into the corresponding modulo-sum multiple-access channels via nested lattice codes and linear network coding and then computes the desired function using linear Slepian–Wolf source coding. For orthogonal Gaussian networks (with no broadcast and multiple-access components), the computation capacity is characterized for a class of networks. For Gaussian networks with multiple-access components (but no broadcast), an approximate computation capacity is characterized for a class of networks.
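The two-stage structure of the scheme (abstract the Gaussian network into a modulo-sum channel, then compute the function over it) can be caricatured with an integer toy model. This is only a sketch of the modulo-sum abstraction, assuming a noiseless additive channel and a modulus q larger than any achievable weighted sum; the paper itself uses nested lattice codes over real channels:

```python
# Toy sketch of the modulo-sum abstraction (illustrative only).
q = 17  # modulus; must exceed the largest possible weighted sum

def node_encode(s, weight):
    # Each node transmits its weighted source symbol modulo q.
    return (weight * s) % q

def channel(symbols):
    # The multiple-access component superimposes (adds) the transmissions;
    # the receiver only needs the sum modulo q.
    return sum(symbols) % q

sources = [3, 5, 2]
weights = [1, 2, 1]
received = channel(node_encode(s, w) for s, w in zip(sources, weights))
# Because the true weighted sum (15) is below q, the modulo-sum output
# equals the desired arithmetic sum function.
assert received == sum(w * s for w, s in zip(weights, sources))
print(received)
```

With correlated sources, the paper replaces the per-node raw symbols with linear Slepian–Wolf encodings, so that fewer channel uses suffice; the toy above only shows why a modulo-sum channel is the right intermediate target.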