# Stability and bounds in aggregate scheduling networks

Abstract

We study networks of FIFO nodes, where flows are constrained by arrival curves. A crucial question for these networks is: can we derive a bound on the maximum delay that a packet can experience when traversing the network, and on the maximum queue size at each node? For a generic FIFO network these are still open issues: some examples show that, contrary to intuition, no matter how low the maximum node utilization is, it is possible to build an unstable FIFO network. The importance of this issue lies in the necessity of hard bounds on packet delay and queue size in order to enable QoS guarantees in these networks. For this reason we tackle the problem through a deterministic approach, based on worst-case behavior. Our first result is a general method for deriving sufficient conditions for the stability of a network: we show how, with a proper choice of the observed variables in the network and with the use of network calculus results, it is possible to derive the expression of an operator whose properties are associated with the stability of the network. Exploiting this method on a simple example, we first derive a generalization of the RIN result to heterogeneous settings and to leaky-bucket-constrained flows. Through some realistic examples, we show how this method allows networks to achieve a level of utilization more than three times larger than the best existing result. By applying the general method to three different classes of variables, we derive new sufficient conditions for stability that perform substantially better than all the main existing results, and we show how those existing results can all be derived from the new conditions. Finally, we present a new formula for the computation of end-to-end delay bounds in a network of GR nodes.
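For a single node, classic network calculus does give closed-form bounds; the hard part addressed by the paper is making such bounds survive aggregation across a whole FIFO network. As an illustrative sketch of the single-node case only (these are the standard textbook bounds, not the paper's new results), the following computes the delay and backlog bounds for a leaky-bucket-constrained flow (burst `sigma`, rate `rho`) served by a rate-latency node (rate `R`, latency `T`):

```python
def single_node_bounds(sigma, rho, R, T):
    """Classic network-calculus bounds for one node.

    Flow arrival curve:  alpha(t) = sigma + rho * t   (leaky bucket)
    Node service curve:  beta(t)  = R * max(0, t - T) (rate-latency)
    Valid only when rho <= R (single-node stability).
    """
    if rho > R:
        raise ValueError("unstable: arrival rate exceeds service rate")
    delay_bound = T + sigma / R       # max horizontal deviation between alpha and beta
    backlog_bound = sigma + rho * T   # max vertical deviation between alpha and beta
    return delay_bound, backlog_bound

# Hypothetical numbers: burst of 2 packets, rate 1 pkt/s, server at 2 pkt/s, 0.5 s latency
d, b = single_node_bounds(sigma=2.0, rho=1.0, R=2.0, T=0.5)
print(d, b)  # 1.5 2.5
```

These per-node bounds do not compose naively under FIFO aggregate scheduling, which is precisely why network-wide stability conditions such as those in the abstract are needed.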

Official source

This page is generated automatically and may contain information that is not correct, complete, up to date, or relevant to your search. The same applies to all other pages on this site. Please verify the information against EPFL's official sources.

Related concepts (10)

Computer network

A computer network (or DCN) is a set of devices linked together to exch…

Leaky bucket

The leaky bucket algorithm controls the number of packets passing through a node of a computer network each second.
It is often mistakenly confused with the…

FIFO (computing and electronics)

In computing and in systems theory, first in, first out (FIFO) is a method for organizing the manipulation of a data structure (often, specifically a d…

Related publications (22)


Jean-Yves Le Boudec, Ludovic Bernard Gérard Thomas

Flow reshaping is used in time-sensitive networks (as in the context of IEEE TSN and IETF Detnet) in order to reduce burstiness inside the network and to support the computation of guaranteed latency bounds. This is performed using per-flow regulators (such as the Token Bucket Filter) or interleaved regulators (as with IEEE TSN Asynchronous Traffic Shaping, ATS). The former use one FIFO queue per flow, whereas the latter use one FIFO queue per input port. Both types of regulators are beneficial as they cancel the increase in burstiness due to multiplexing inside the network. It was demonstrated, by using network calculus, that they do not increase the worst-case latency. However, the properties of regulators were established assuming that time is perfect in all network nodes. In reality, nodes use local, imperfect clocks. Time-sensitive networks exist in two flavours: (1) in non-synchronized networks, local clocks run independently at every node and their deviations are not controlled, and (2) in synchronized networks, the deviations of local clocks are kept within very small bounds using, for example, a synchronization protocol (such as PTP) or a satellite-based geo-positioning system (such as GPS). We revisit the properties of regulators in both cases. In non-synchronized networks, we show that ignoring the timing inaccuracies can lead to network instability due to unbounded delay in per-flow or interleaved regulators. We propose and analyze two methods (rate and burst cascade, and asynchronous dual arrival-curve method) for avoiding this problem. In synchronized networks, we show that there is no instability with per-flow regulators but, surprisingly, interleaved regulators can lead to instability. To establish these results, we develop a new framework that captures industrial requirements on clocks in both non-synchronized and synchronized networks, and we develop a toolbox that extends network calculus to account for clock imperfections.
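A per-flow regulator of the kind discussed above can be sketched as a token bucket releasing each packet at the earliest time the bucket holds enough tokens. This is a minimal illustrative model assuming a perfect local clock (exactly the assumption the paper revisits); function and parameter names are hypothetical:

```python
def token_bucket_release_times(arrivals, sizes, rate, burst):
    """Earliest release times for packets through a per-flow token-bucket regulator.

    arrivals: packet arrival times (non-decreasing), sizes: token cost per packet,
    rate: token refill rate, burst: bucket capacity. Assumes an ideal clock.
    """
    tokens = burst   # bucket starts full
    last = 0.0       # release time of the previous packet (FIFO order)
    releases = []
    for a, s in zip(arrivals, sizes):
        t = max(a, last)                           # FIFO: never overtake the predecessor
        tokens = min(burst, tokens + (t - last) * rate)  # refill since last release
        if tokens < s:                             # wait until enough tokens accrue
            t += (s - tokens) / rate
            tokens = s
        tokens -= s
        last = t
        releases.append(t)
    return releases

# Three back-to-back unit packets through a (rate=1, burst=1) regulator
print(token_bucket_release_times([0.0, 0.0, 0.0], [1, 1, 1], rate=1.0, burst=1.0))
# → [0.0, 1.0, 2.0]
```

The paper's observation is that when the clock driving `rate` drifts relative to the clocks of upstream nodes, the reshaped flow may no longer conform downstream, which is what can make the regulator's delay unbounded in non-synchronized networks.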

Over the past decade, investigations in different fields have focused on studying and understanding real networks, ranging from biological to social to technological. These networks, called complex networks, exhibit common topological features, such as a heavy-tailed degree distribution and the small world effect. In this thesis we address two interesting aspects of complex, and more specifically, social networks: (1) users’ privacy, and the vulnerability of a network to user identification, and (2) dynamics, or the evolution of the network over time. For this purpose, we base our contributions on a central tool in the study of graphs and complex networks: graph sampling. We conjecture that each observed network can be treated as a sample from an underlying network. Using this, a sampling process can be viewed as a way to observe dynamic networks, and to model the similarity of two correlated graphs by assuming that the graphs are samples from an underlying generator graph. We take the thesis in two directions. For the first, we focus on the privacy problem in social networks. There have been hot debates on the extent to which the release of anonymized information to the public can leak personally identifiable information (PII). Recent works have shown methods that are able to infer true user identities, under certain conditions and by relying on side information. Our approach to this problem relies on the graph structure, where we investigate the feasibility of de-anonymizing an unlabeled social network by using the structural similarity to an auxiliary network. We propose a model where the two partially overlapping networks of interest are considered samples of an underlying graph. Using such a model, first, we propose a theoretical framework for the de-anonymization problem, we obtain minimal conditions under which de-anonymization is feasible, and we establish a threshold on the similarity of the two networks above which anonymity could be lost. 
Then, we propose a novel algorithm based on a Bayesian framework, which is capable of matching two graphs of thousands of nodes, with no side information other than the network structures. Our method has several potential applications, e.g., inferring user identities in an anonymized network by using a similar public network, cross-referencing dictionaries of different languages, correlating data from different domains, etc. We also introduce a novel privacy-preserving mechanism for social recommender systems, where users can receive accurate recommendations while hiding their profiles from an untrusted recommender server. For the second direction of this work, we focus on models for network growth, more specifically where the number of edges grows faster than the number of nodes, a property known as densification. The densification phenomenon has been recently observed in various real networks, and we argue that it can be explained simply through the way we observe (sample) networks. We introduce a process of sampling the edges of a fixed graph, which results in the super-linear growth of edges versus nodes, and show that densification arises if and only if the graph has a power-law degree distribution.
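The edge-sampling observation process described above can be sketched as follows. This is only an illustration of the observation model on a hypothetical small graph; the thesis's result is that the super-linear (densification) regime arises exactly when the underlying graph has a power-law degree distribution, which a toy clique does not exhibit:

```python
import random

def observe_by_edge_sampling(edges, num_samples, seed=0):
    """Observe a fixed underlying graph by sampling its edges uniformly with
    replacement; return the (nodes seen, edges seen) counts after each sample."""
    rng = random.Random(seed)
    seen_edges, seen_nodes, trajectory = set(), set(), []
    for _ in range(num_samples):
        u, v = rng.choice(edges)
        seen_edges.add((u, v))
        seen_nodes.update((u, v))
        trajectory.append((len(seen_nodes), len(seen_edges)))
    return trajectory

# Hypothetical underlying graph: an 8-node clique (28 edges)
nodes = range(8)
clique_edges = [(u, v) for u in nodes for v in nodes if u < v]
traj = observe_by_edge_sampling(clique_edges, num_samples=40)
```

Each observed snapshot is a subgraph of the fixed generator graph, matching the thesis's view of a dynamic network as a growing sequence of samples.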

Michael Christoph Gastpar, Su Li

The coded cooperative data exchange problem is studied for the fully connected network. In this problem, each node initially only possesses a subset of the K packets making up the file. Nodes make broadcast transmissions that are received by all other nodes. The goal is for each node to recover the full file. In this paper, we present a polynomial-time deterministic algorithm to compute the optimal (i.e., minimal) number of required broadcast transmissions and to determine the precise transmissions to be made by the nodes. A particular feature of our approach is that each of the K − d transmissions is a linear combination of exactly d + 1 packets, and we show how to optimally choose the value of d. We also show how the coefficients of these linear combinations can be chosen by leveraging a connection to Maximum Distance Separable (MDS) codes.
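A minimal worked instance (constructed here for illustration, not taken from the paper): K = 3 packets and three fully connected nodes, node i missing packet i. Then K − d = 2 broadcasts with d = 1 suffice, each combining exactly d + 1 = 2 packets; XOR plays the role of the GF(2) special case of the MDS-coded linear combinations:

```python
def xor(a, b):
    """Bytewise XOR of two equal-length packets (linear combination over GF(2))."""
    return bytes(x ^ y for x, y in zip(a, b))

p1, p2, p3 = b"AA", b"BB", b"CC"
# Initial side information: node i holds every packet except packet i
has = {1: {2: p2, 3: p3}, 2: {1: p1, 3: p3}, 3: {1: p1, 2: p2}}

# Broadcast 1: node 1 sends p2 XOR p3; broadcast 2: node 2 sends p1 XOR p3
b1, b2 = xor(p2, p3), xor(p1, p3)

recovered = {
    1: xor(b2, has[1][3]),  # node 1: (p1^p3)^p3 = p1
    2: xor(b1, has[2][3]),  # node 2: (p2^p3)^p3 = p2
    3: xor(b1, has[3][2]),  # node 3: (p2^p3)^p2 = p3
}
print(recovered)  # {1: b'AA', 2: b'BB', 3: b'CC'}
```

After the two broadcasts every node holds the full file; the paper's algorithm computes both the minimal number of such transmissions and the combinations themselves in the general case.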

2017