
# New algorithmic methods for real-time transportation problems

Abstract

Two of the most basic problems encountered in numerical optimization are least-squares problems and systems of nonlinear equations. The use of more and more complex simulation tools on high-performance computers requires solving problems involving an increasingly large number of variables. The main thrust of this thesis is the design of new algorithmic methods for solving large-scale instances of these two problems. Although they are relevant in many different applications, we concentrate specifically on real applications encountered in the context of Intelligent Transportation Systems to illustrate their performance.

First, we propose a new approach for the estimation and prediction of Origin-Destination tables. This problem is usually solved using a Kalman filter approach, which refers to both the formulation and the resolution algorithm. We prefer to consider an explicit least-squares formulation. It offers convenient and flexible algorithms especially designed to solve large-scale problems. Numerical results provide evidence that this approach requires significantly less computational effort than the Kalman filter algorithm. Moreover, it makes it possible to consider larger problems, likely to occur in real applications.

Second, a new class of quasi-Newton methods for solving systems of nonlinear equations is presented. The main idea is to generalize classical methods by building a model using more than two previous iterates. We use a least-squares approach to calibrate this model, as exact interpolation requires a fixed number of iterates and may be numerically problematic. Under classical assumptions, we give a proof of local convergence for this class of methods. Computational comparisons with standard quasi-Newton methods highlight substantial improvements in terms of robustness and number of function evaluations.
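The multi-iterate idea can be sketched as follows: instead of a rank-one secant update, the Jacobian model is calibrated in the least-squares sense against the last few step/function-difference pairs. This is an illustrative simplification under our own naming, not the thesis algorithm itself:

```python
import numpy as np

def multi_secant_solve(F, x0, m=5, tol=1e-10, max_iter=50):
    """Quasi-Newton iteration whose Jacobian model is calibrated by
    least squares over up to m previous secant pairs (illustrative)."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    J = np.eye(n)                            # initial Jacobian approximation
    xs, fs = [x.copy()], [F(x)]
    for _ in range(max_iter):
        if np.linalg.norm(fs[-1]) < tol:
            break
        x = x - np.linalg.solve(J, fs[-1])   # quasi-Newton step
        xs.append(x.copy()); fs.append(F(x))
        xs, fs = xs[-(m + 1):], fs[-(m + 1):]
        # columns: steps s_i = x_{i+1} - x_i, differences y_i = F(x_{i+1}) - F(x_i)
        S = np.column_stack([xs[i + 1] - xs[i] for i in range(len(xs) - 1)])
        Y = np.column_stack([fs[i + 1] - fs[i] for i in range(len(fs) - 1)])
        # least-squares multi-secant update: satisfy J S ~ Y while
        # staying close to the previous model
        J = J + (Y - J @ S) @ np.linalg.pinv(S)
    return x
```

With a single pair (`m=1`) this reduces to Broyden's classical rank-one update; larger `m` uses more history, which is the flavor of generalization studied here.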
We derive from this class of methods a matrix-free algorithm designed to solve large-scale systems of nonlinear equations without assuming any particular structure in the problems. We have successfully tried out the method on problems with up to one million variables. Computational experiments on standard problems show that this algorithm outperforms classical large-scale quasi-Newton methods in terms of efficiency and robustness. Moreover, its numerical performance is similar to that of Newton-Krylov methods, currently considered the best methods for solving large-scale systems of equations. In addition, we provide numerical evidence of the superiority of our method for solving noisy systems of nonlinear equations.

This method is then applied to consistent anticipatory route guidance generation. Route guidance refers to information provided to travelers in an attempt to facilitate their decisions relative to departure time, travel mode and route. We are specifically interested in consistent anticipatory route guidance, in which real-time traffic measurements are used to make short-term predictions, involving complex simulation tools, of future traffic conditions. These predictions are the basis of the guidance information that is provided to users. By consistent, we mean that the anticipated traffic conditions used to generate the guidance must be similar to the traffic conditions that the travelers are actually going to experience on the network. The problem is tricky because, unlike weather forecasting, where the real system under consideration is not affected by the provision of information, the very fact of providing travel information may modify the future traffic conditions and, therefore, invalidate the prediction that was used to generate it. Bottom (2000) proposed a general fixed point formulation of this problem with the following characteristics.
First, as guidance generation involves considerable amounts of computation, this fixed point problem must be solved quickly and accurately enough for the results to be timely and of use to drivers. Secondly the unavailability of a closed-form objective function and the presence of noise due to the use of simulation tools prevent from using classical algorithms. A number of simulation experiments based on two system software including DynaMIT a state-of-the-art, real-time computer system for traffic estimation and prediction, developed at the Intelligent Transportation Systems Program of the Massachusetts Institute of Technology (MIT), have been run. These numerical results underline the good behavior of our large-scale method compared to classical fixed point methods for solving the consistent anticipatory route guidance problem. We close with some comments about future promising directions of research.
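The consistency requirement can be read as a fixed point condition: find guidance g such that the traffic conditions predicted under g reproduce g itself. A classical baseline for such problems is the method of successive averages (MSA); below is a minimal sketch, where `predict` is our stand-in for the whole simulation-based prediction loop:

```python
import numpy as np

def msa_fixed_point(predict, g0, max_iter=200, tol=1e-8):
    """Method of successive averages for the fixed point g = predict(g).
    `predict` is an illustrative stand-in for the prediction model."""
    g = np.asarray(g0, dtype=float)
    for k in range(1, max_iter + 1):
        g_new = predict(g)
        if np.linalg.norm(g_new - g) < tol:
            return g_new
        g = g + (g_new - g) / k      # decreasing averaging step
    return g

# Plain substitution g <- predict(g) oscillates forever on this toy map;
# MSA damps the oscillation and settles on the fixed point g = 1.
guidance = msa_fixed_point(lambda g: 2.0 - g, np.array([0.0]))
```

Simple averaging schemes like this converge slowly, which is one motivation for replacing them with faster secant-type fixed point methods on large problems.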



Related concepts (20)

Problem solving

Problem solving is the process of identifying and then implementing a solution to a problem.

Algorithm

An algorithm is a finite, unambiguous sequence of instructions and operations for solving a class of problems.

Intelligent transportation system

Intelligent transportation systems (ITS) are applications of new information and communication technologies to the field of transportation.

Related publications (71)

In 1971, the first mass-produced microprocessor had 2,300 transistors and was able to compute 60,000 operations per second at 740 kHz. Nowadays, even a common gaming console uses a central unit including 243 million transistors running at 4 GHz, organized in a 9-core architecture. This exponential evolution, following the famous "Moore's Law", is made possible by innovations in every domain involved in chip design, and enables the creation of chipsets targeting an ever wider range of applications. This variety of processors implies a great number of variations in the hardware choices, to match the application (general-purpose processor, digital signal processor, application-specific processor…), performance requirements and/or other operating constraints such as power consumption. The applications running on these powerful solutions are growing exponentially in complexity as well, relying on more and more sophisticated software. Consequently, the design team faces a real challenge when designing and implementing an optimal architectural solution. To ease its task, the trend in computer-aided design is to develop integrated frameworks for simulation and design, providing all the tools needed during the process in a top-down approach. These tools allow system specification by means of abstract concurrent entities, down to the final implementation, through successive refinements of the different entities and of the communication among them. In parallel, these hardware solutions run applications of constantly increasing complexity, leading to the need for more and more intensive specification and validation tasks by means of abstract software descriptions, typically referred to as verification models.
These models implement all the functionalities of the application; thus, from the end of this specification phase, the abstract description of the application in the verification model becomes the most reliable reference for the corresponding application. All these factors make an intuitive understanding of the processing complexity difficult and tend to create redesign loops in the design process, when it is realized that the initial architecture choices were sub-optimal (or even wrong), preventing the system from meeting the desired performance constraints. In this case, a costly re-design iteration is mandatory, targeting a new platform choice but introducing delay and cost into the design, possibly causing the project to reach the market too late. Obviously, to eliminate these re-design iterations and to provide help as early as possible, the preliminary complexity analysis should be performed before starting the design of the target architecture. Since the first step of the design cycle implies an algorithm description through a verification model, it is desirable to perform the complexity study on this model. Indeed, though considerable efforts are devoted to developing powerful co-design tools, by means of high-level abstract descriptions, fast prototyping and flexible design reusability, no automatic tool is available to help the designer in the first analysis and comprehension phase. Moreover, the size of verification models grows together with the complexity of the applications, and it is not surprising to have to deal with verification models of tens of thousands of lines of source code; consequently, paper-and-pencil and manual-instrumentation analysis methods are no longer feasible or reliable. Furthermore, static complexity evaluations, which do not take into account the dynamic dependency of the complexity on the input data, are becoming less and less meaningful for the development of cost-effective solutions.
Worst-case analysis is typically suitable for real-time control applications, but it cannot be applied in all the cases in which quality-of-service/cost trade-offs are of concern; for instance, a multimedia or signal-processing architecture developed to support the worst-case scenario would simply turn out to be too expensive and under-exploited most of the processing time. This dissertation presents a new approach to algorithmic complexity analysis, called the Software Instrumentation Tool (SIT), based on the previous observations and aiming at filling the gap in state-of-the-art tools. The main goal of the SIT is to provide a high-level analysis framework for both a faithful design-oriented evaluation of the computational complexity of verification models and a hardware simulation of the projected architecture. The breakthrough in this version of the SIT is to provide a full set of results about complexity in a complete framework, making it possible to use all these data in a methodology able to fill the gap from verification model to hardware implementation or algorithm porting. The analysis is performed at the highest possible abstraction level, i.e. at the source code language level, in order to adhere to the same degree of abstraction as the verification models; that is, the analysis is not biased by the underlying architecture on which the verification model is tested or by the compilation process. The analysis strictly depends on the input data, in order to take into account the real algorithmic complexity under real working conditions, and relies on a completely automatic process, performed directly on the source code of verification models, without forcing the designer to make any manual modification such as source code rewriting or manual instrumentation. The SIT approach to complexity analysis is based on instrumentation by means of C++ operator overloading.
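The operator-overloading principle can be illustrated with a toy Python analogue (SIT itself instruments C code via C++; the class and counter names below are ours, not SIT's): wrapping the data type makes every arithmetic operation report itself, so the operation count reflects the actual input data without modifying the algorithm's source.

```python
from collections import Counter

class Instrumented:
    """Toy analogue of operator-overloading instrumentation: each
    arithmetic operation on a wrapped value is tallied in `counts`."""
    counts = Counter()

    def __init__(self, v):
        self.v = v

    def _binop(self, other, op, name):
        Instrumented.counts[name] += 1   # record the operation
        o = other.v if isinstance(other, Instrumented) else other
        return Instrumented(op(self.v, o))

    def __add__(self, other): return self._binop(other, lambda a, b: a + b, "add")
    def __mul__(self, other): return self._binop(other, lambda a, b: a * b, "mul")

def horner(coeffs, x):
    # the algorithm itself is unmodified; complexity is measured
    # entirely through the instrumented data type
    acc = Instrumented(0)
    for c in coeffs:
        acc = acc * x + c
    return acc

value = horner([1, 2, 3], Instrumented(2.0))
# Instrumented.counts now holds the data-dependent operation profile
```

The same idea scales to memory accesses and data transfers: overloading assignment and indexing lets the instrumented type feed the dedicated analysis cores.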
SIT analyzes C source code and can automatically instrument any legal C data type, operation or statement, eventually providing an exhaustive complexity analysis at the C language level. The tool is built on a core system (one core for the base algorithmic complexity, a second for memory analysis) and supports module extensions (a module for critical path evaluation and a module for data transfer, both of which can be plugged into the memory core). All the cores and modules report their results in a single visual interface, allowing a quick and efficient complexity overview as well as an in-depth exploration of the results. As SIT allows extracting meaningful complexity measures directly from the verification models in the preliminary analysis and comprehension phase, the designer benefits from a tool-assisted approach, making it possible to step through the later design phases with intermediate tools. The results provided by the first version of the SIT are computational data (what types of operations are performed? on which types of data?) and memory usage analysis (how many memory accesses? in which function?). This build of the SIT includes a customizable memory usage analysis, made possible by the simulation of (customizable) virtual memory architectures; this makes it possible to focus the analysis on aspects of the algorithmic complexity which could not otherwise be estimated by static analysis methods, software profilers, or instruction-level profilers. Another novelty in the SIT is the data-transfer module, allowing both a quick and efficient overview of the main data flows inside the virtual architecture (thanks to a graphical representation of the main transfers) and a precise in-depth analysis of bandwidth and data transfers between functions (thanks to the detailed view included in SitView).
The latest result is the critical path evaluation analysis, which makes it possible to detect the major functions of the execution tree and to extract information about parallelism at different levels, from the function level to the algorithm level. All this information is provided through the SIT graphical interface SitView, and a methodology is described to step from the high-level verification model in C to an actor-language partitioning based on the tool analysis. This partitioning can be used with dedicated actor languages to create FPGA code or an optimized hardware solution, most probably better than an instinctively designed one.

Michel Bierlaire, Frank Crittin

Route guidance refers to information provided to travelers in an attempt to facilitate their decisions relative to departure time, travel mode and route. We are specifically interested in consistent anticipatory route guidance, in which real-time traffic measurements are used to make short-term predictions, involving complex simulation tools, of future traffic conditions. These predictions are the basis of the guidance information that is provided to users. By consistent, we mean that the anticipated traffic conditions used to generate the guidance must be similar to the traffic conditions that the travelers are actually going to experience on the network. The problem is tricky because, unlike weather forecasting, where the real system under consideration is not affected by the provision of information, the very fact of providing travel information may modify the future traffic conditions and, therefore, invalidate the prediction that was used to generate it. Bottom (2000) proposed a general fixed point formulation of this problem with the following characteristics. First, as guidance generation involves considerable amounts of computation, this fixed point problem must be solved quickly and accurately enough for the results to be delivered to drivers in time. Second, the unavailability of an analytical form for the objective function and the presence of noise due to the use of simulation tools prevent the use of classical algorithms. In this paper we propose an adaptation of the generalized secant method (cf. the related presentation "A generalization of secant methods for solving nonlinear systems of equations") to handle the intrinsic characteristics of consistent anticipatory route guidance generation, especially the very high dimension associated with real problems. We then present a number of simulation experiments based on two simulation tools, in order to compare the performance of the various algorithms.
The first is a simple simulator implementing the framework of route guidance generation on a small network, which is used to illustrate the properties of this problem and the behavior of the algorithms. Then, we present a large-scale case study of size 124,575 using DynaMIT, a simulation-based real-time Dynamic Traffic Assignment system designed to compute and disseminate anticipatory route guidance. These results point out the real-time potential of the method as well as its ability to handle large-scale problems.

2004

Recent advances in data processing and communication systems have led to a continuous increase in the amount of data communicated over today's networks. These large volumes of data pose new challenges to the current networking infrastructure, which only offers a best-effort mechanism for data delivery. The emergence of new distributed network architectures, such as peer-to-peer networks and wireless mesh networks, and the need for efficient data delivery mechanisms have motivated researchers to reconsider the way information is communicated and processed in networks. This has given rise to a new research field called network coding. The network coding paradigm departs from the traditional routing principle, where information is simply relayed by the network nodes towards the destination, and introduces some intelligence into the network through coding at the intermediate nodes. This in-network data processing has been shown to substantially improve the performance of data delivery systems in terms of throughput and error resilience in networks with high path diversity. Motivated by the promising results in network coding research, we focus in this thesis on the design of network coding algorithms for the simultaneous transmission of multiple data sources in overlay networks. We investigate several problems that arise in the context of inter-session network coding, namely (i) decoding delay minimization in inter-session network coding, (ii) distributed rate allocation for inter-session network coding and (iii) correlation-aware decoding of incomplete network coded data. We start by proposing a novel framework for data delivery from multiple sources to multiple clients in an overlay wireline network, where intermediate nodes employ randomized inter-session network coding. We consider networks with high resource diversity, which creates network coding opportunities with possibly large gains in terms of throughput, delay and error robustness.
However, the coding operations at the intermediate nodes must be carefully designed in order to enable efficient data delivery. We look at the problem from the decoding delay perspective and design solutions that lead to a small decoding delay at the clients through proper coding and rate allocation. We cast the optimization problem as a rate allocation problem, which seeks the coding operations that minimize the average decoding delay in the client population. We demonstrate the validity of our algorithm through simulations in representative network topologies. The results show that an effective combination of intra- and inter-session network coding based on randomized linear coding makes it possible to reach small decoding delays and to better exploit the available network resources, even in challenging network settings. Next, we design a distributed rate allocation algorithm where the users decide locally how many intra- and inter-session network coded packets should be requested from the parent nodes in order to achieve minimal decoding delay. The capability to take coding decisions locally, with only partial knowledge of the network statistics, is of crucial importance for applications where users are organized in dynamic overlay networks. We propose a receiver-driven communication protocol that operates in two rounds. First, the users request and obtain information regarding the network conditions and packet availability in their local neighborhood. Then, every user independently optimizes the rate allocation among the different possible intra- and inter-session packet combinations to be requested from its parents. We also introduce the novel concept of equivalent flows, which permits efficient estimation of the expected number of packets necessary for decoding, and hence simplifies the rate allocation process. Experimental results indicate that our algorithm is capable of eliminating bottlenecks and reducing the decoding delay of users with limited resources.
We further investigate the application of the proposed distributed rate allocation algorithm to the transmission of video sequences and validate the performance of our system using the NS-3 simulator. The simulation results show that the proposed rate allocation algorithm succeeds in improving the quality of the delivered video compared to solutions based on intra-session network coding. Finally, we investigate the problem of decoding the source information from an incomplete set of network coded data with the help of source priors in a finite algebraic field. The inability to form a complete decoding system can often be caused by transmission losses or timing constraints imposed by the application. In this case, exact reconstruction of the source data by conventional algorithms such as Gaussian elimination is not feasible; however, partial recovery of the source data may still be possible, which can be useful in applications where approximate reconstruction is informative. We use the statistical characteristics of the source data in order to perform approximate decoding. We first analyze the performance of a hypothetical maximum a posteriori decoder, which recovers the source data from an incomplete set of network coded data given the joint statistics of the sources. We derive an upper bound on the probability of erroneous source sequence decoding as a function of the system parameters. We then propose a constructive solution to the approximate decoding problem and design an iterative decoding algorithm based on message passing, which jointly considers the network coding and correlation constraints. We illustrate the performance of our decoding algorithm through extensive simulations on synthetic and real data sets. The results demonstrate that, even using a simple correlation model expressed as correlation noise between pairs of sources, the original source data can be partially decoded in practice from an incomplete set of network coded symbols.
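As a concrete illustration of the coding and decoding operations discussed here, the sketch below implements randomized linear network coding over GF(2), with decoding by Gaussian elimination; when too few innovative packets arrive, the system is incomplete and exact decoding fails, which is precisely the regime the approximate decoder above targets. Function names and the restriction to GF(2) are ours for illustration; the thesis setting is more general.

```python
import numpy as np

def rlnc_encode(packets, n_coded, rng):
    """Each coded packet is a random XOR (GF(2)) combination of the
    source packets; the coefficient vector travels with the payload."""
    k = len(packets)
    coeffs = rng.integers(0, 2, size=(n_coded, k), dtype=np.uint8)
    coded = (coeffs @ np.asarray(packets, dtype=np.uint8)) % 2
    return coeffs, coded

def rlnc_decode(coeffs, coded):
    """Gauss-Jordan elimination over GF(2); returns None when the
    received set is incomplete (rank-deficient)."""
    A = np.concatenate([coeffs, coded], axis=1).astype(np.uint8) % 2
    k = coeffs.shape[1]
    row = 0
    for col in range(k):
        pivots = np.nonzero(A[row:, col])[0]
        if pivots.size == 0:
            continue                         # no pivot for this column
        A[[row, row + pivots[0]]] = A[[row + pivots[0], row]]
        for r in range(A.shape[0]):
            if r != row and A[r, col]:
                A[r] ^= A[row]               # XOR-eliminate
        row += 1
    if row < k:
        return None                          # not enough innovative packets
    return A[:k, k:]                         # recovered source packets
```

Intermediate nodes can forward fresh random combinations of whatever they hold; a client decodes as soon as it has collected k linearly independent coefficient vectors.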
Overall, this thesis addresses several important issues related to the design of efficient data delivery methods with inter-session network coding. Our novel framework for decoding delay minimization can impact the development of practical inter-session network coding algorithms that are appropriate for applications with low-delay requirements. Our rate allocation algorithms are able to exploit the high resource diversity of modern networking systems and represent an effective alternative for the development of distributed communication systems. Finally, our algorithm for data recovery from incomplete network coded data using correlation priors can contribute significantly to improving the quality of the delivered data and provide new insights towards the design of joint source and network coding algorithms.