
Publication

Time vs. Truth: Age-Distortion Tradeoffs and Strategies for Distributed Inference

Abstract

In 1948, Claude Shannon laid the foundations of information theory, which grew out of a study of the ultimate limits of source compression and of reliable communication. Since then, information theory has proved to be not only a quest for these limits but also a toolbox that provides new machinery and perspectives on problems in various fields. Shannon's original formulation of the communication problem omitted semantic aspects. Modern communication systems, however, necessitate the consideration of semantics such as the fidelity and freshness of data. Shannon did study a problem related to the fidelity of data, known as rate-distortion theory, which can be seen as an attempt to incorporate semantics in a weak sense. Freshness, however, was not widely studied until 2011, when Kaul, Yates and Gruteser introduced a new metric for its assessment, called the age of information (AoI).

Since 2011, AoI has become a widely studied notion as data freshness grows increasingly important. At the same time, not all data is equally important. Aligned with this observation, in Part 1 we study a discrete-time model in which each packet has a cost of not being sent, which may depend on the packet content. We study the tradeoff between age and cost when the sender is confined to packet-based strategies. We show that the optimal tradeoff can be attained with finite-memory strategies, and we devise an efficient policy iteration algorithm to find these optimal strategies. Allowing coding across packets significantly enlarges the class of packet-based strategies, and we show that when the packet payloads are small, coding improves performance. Furthermore, we study a related problem in which some of the equally important packets must be sent in order to control the output rate. 'Which packet to send, and when?' is the relevant question here, and we show that if the packet arrival process is memoryless, a simple class of strategies attains the optimal tradeoff. The same class of strategies also solves the analogous continuous-time problem, where packets arrive according to a Poisson process.

In Part 2, we study two distributed hypothesis testing problems: (i) one with a centralized architecture and (ii) one with a fully decentralized architecture. In the centralized problem, we consider peripheral nodes that send quantized data to a fusion center in a memoryless fashion, with the expected number of bits sent by each node under the null hypothesis kept limited. We characterize the optimal decay rate of the misdetection probability provided that false alarms are rare, and study the tradeoff between the communication rate and the maximal decay rate of the misdetection probability. Using the information theory toolbox, we resort to rate-distortion methods to provide upper bounds on the tradeoff curve, and we show that at high rates lattice quantization achieves near-optimal performance. In the decentralized problem, we study a locally Bayesian scheme in which, at every time instant, each node chooses to receive information from one of its neighbors at random. We show that under this sparser communication scheme the agents eventually learn the truth, and the asymptotic convergence rate remains the same as that of the standard algorithms. We also derive large deviation estimates of the log-belief ratios for a special case in which each agent replaces its belief with that of the chosen neighbor.
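As a toy illustration of the age-cost tradeoff behind Part 1 (this sketch is not from the thesis), consider a discrete-time model in which a fresh packet is available every slot and the sender follows a simple threshold policy: transmit only once the receiver's information has aged past `tau` slots. The model, the policy, and the function name are illustrative assumptions, far simpler than the content-dependent costs and coding strategies studied in the thesis.

```python
def simulate_threshold_policy(tau, horizon=10_000):
    """Simulate AoI under a threshold policy.

    Each slot the age grows by one; when it reaches `tau`, a fresh
    packet is transmitted (and delivered instantly), resetting the age.
    Returns (average age, transmissions per slot).
    """
    age, total_age, sends = 0, 0, 0
    for _ in range(horizon):
        age += 1            # staleness grows by one slot
        if age >= tau:      # threshold rule: transmit when too stale
            sends += 1
            age = 0         # fresh update resets the age
        total_age += age
    return total_age / horizon, sends / horizon
```

Sweeping `tau` traces a toy age-cost curve: a larger threshold cuts the sending rate (and hence the sending cost) at the price of a staler receiver. For `tau=4`, the policy yields an average age of 1.5 with a transmission in one slot out of four.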


Related concepts (48)

Packet switching

In telecommunications, packet switching is a method of grouping data into packets that are transmitted over a digital network. Packets are made of a header and a payload. Data in the header is used by networking hardware to direct the packet to its destination, where the payload is extracted and used by an operating system, application software, or higher layer protocols. Packet switching is the primary basis for data communications in computer networks worldwide.
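The header/payload split described above can be made concrete with a small sketch using Python's `struct` module. The 8-byte header layout here (source id, destination id, sequence number, payload length) is a hypothetical format invented for illustration, not any real protocol's header.

```python
import struct

# Hypothetical header: four big-endian 16-bit fields
# (source id, destination id, sequence number, payload length).
HEADER = struct.Struct("!HHHH")

def make_packet(src, dst, seq, payload: bytes) -> bytes:
    """Prepend the header to the payload."""
    return HEADER.pack(src, dst, seq, len(payload)) + payload

def parse_packet(packet: bytes) -> dict:
    """Split a packet back into header fields and payload."""
    src, dst, seq, length = HEADER.unpack_from(packet)
    payload = packet[HEADER.size:HEADER.size + length]
    return {"src": src, "dst": dst, "seq": seq, "payload": payload}
```

Networking hardware would read only the fixed-size header to route the packet; the receiver extracts the payload and hands it to the application.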

Network packet

In telecommunications and computer networking, a network packet is a formatted unit of data carried by a packet-switched network. A packet consists of control information and user data; the latter is also known as the payload. Control information provides data for delivering the payload (e.g., source and destination network addresses, error detection codes, or sequencing information). Typically, control information is found in packet headers and trailers.

Packet loss

Packet loss occurs when one or more packets of data travelling across a computer network fail to reach their destination. Packet loss is either caused by errors in data transmission, typically across wireless networks, or network congestion. Packet loss is measured as a percentage of packets lost with respect to packets sent. The Transmission Control Protocol (TCP) detects packet loss and performs retransmissions to ensure reliable messaging.
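The percentage measurement mentioned above can be sketched as follows; this is a minimal illustration (comparing sent and received sequence numbers), not how any particular tool implements it.

```python
def packet_loss_percent(sent_seqs, received_seqs) -> float:
    """Packet loss as a percentage of packets sent that never arrived."""
    sent = set(sent_seqs)
    lost = sent - set(received_seqs)
    return 100.0 * len(lost) / len(sent)
```

For example, if sequence numbers 0-9 were sent but 3 and 7 never arrived, the loss rate is 20%.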

Related MOOCs (29)

Digital Signal Processing [retired]

The course provides a comprehensive overview of digital signal processing theory, covering discrete time, Fourier analysis, filter design, sampling, interpolation and quantization; it also includes a ...

Digital Signal Processing

Digital Signal Processing is the branch of engineering that, in the space of just a few decades, has enabled unprecedented levels of interpersonal communication and of on-demand entertainment. By rewo ...

Digital Signal Processing I

Basic signal processing concepts, Fourier analysis and filters. This module can be used as a starting point or a basic refresher in elementary DSP.

Related publications (99)

Ali H. Sayed, Virginia Bordignon

This work addresses the problem of sharing partial information within social learning strategies. In social learning, agents solve a distributed multiple hypothesis testing problem by performing two operations at each instant: first, agents incorporate inf ...

Gruber et al. (2022) offered a framework for how to explain "Physical time within human time", solving the 'two times problem'. Here, I am asking whether such a problem exists at all. To question the question, I will appeal to neurobiological, evolutionary, and ...

Time-sensitive networks provide worst-case guarantees for applications in domains such as the automotive, automation, avionics, and space industries. A violation of these guarantees can cause considerable financial loss and serious damage to human lives ...