
Publication

# Coevolutionary fuzzy modeling

Abstract

This thesis presents Fuzzy CoCo, a novel approach for system design, conducive to explaining human decisions. Based on fuzzy logic and coevolutionary computation, Fuzzy CoCo is a methodology for constructing systems able to accurately predict the outcome of a human decision-making process while providing an understandable explanation of the underlying reasoning. Fuzzy logic provides a formal framework for constructing systems exhibiting both good numeric performance (precision) and linguistic representation (interpretability). From a numeric point of view, fuzzy systems exhibit nonlinear behavior and can handle imprecise and incomplete information. Linguistically, they represent knowledge in the form of rules, a natural way of explaining decision processes. Fuzzy modeling (the construction of fuzzy systems) is an arduous task, demanding the identification of many parameters. This thesis analyses the fuzzy-modeling problem and different approaches to coping with it, focusing on evolutionary fuzzy modeling (the design of fuzzy inference systems using evolutionary algorithms), which constitutes the methodological basis of my approach. To support this analysis, the parameters of a fuzzy system are classified into four categories: logic, structural, connective, and operational. The central contribution of this work is the use of an advanced evolutionary technique (cooperative coevolution) for dealing with the simultaneous design of connective and operational parameters. Cooperative coevolutionary fuzzy modeling overcomes several limitations exhibited by other standard evolutionary approaches: stagnation, convergence to local optima, and computational costliness. Designing interpretable systems is a prime goal of my approach, which I study thoroughly herein.
Based on a set of semantic and syntactic criteria regarding the definition of linguistic concepts and their causal connections, I propose a number of strategies for producing more interpretable fuzzy systems. These strategies are implemented in Fuzzy CoCo, resulting in a modeling methodology that provides high numeric precision while incurring as small a loss of interpretability as possible. After testing Fuzzy CoCo on a benchmark problem (Fisher's Iris data), I successfully apply the algorithm to model the decision processes involved in two breast-cancer diagnostic problems: the WBCD problem and the Catalonia mammography interpretation problem. For the WBCD problem, Fuzzy CoCo produces systems of both high performance and high interpretability, comparable to, if not better than, the best systems demonstrated to date. For the Catalonia problem, an evolved high-performance system was embedded within a web-based tool, called COBRA, for aiding radiologists in mammography interpretation. Several aspects of Fuzzy CoCo are thoroughly analyzed to provide a deeper understanding of the method. These analyses show the consistency of the results. They also help derive a stepwise guide to applying Fuzzy CoCo, and a set of qualitative relationships between some of its parameters that facilitate setting up the algorithm. Finally, this work proposes and preliminarily explores two extensions to the method: Island Fuzzy CoCo and Incremental Fuzzy CoCo, which together with the original CoCo constitute a family of coevolutionary fuzzy modeling techniques. The aim of these extensions is to guide the choice of an adequate number of rules for a given problem. While Island Fuzzy CoCo performs an extended search over different problem sizes, Incremental Fuzzy CoCo bases its search power on a mechanism of incremental evolution.
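The cooperative-coevolution principle at the heart of Fuzzy CoCo can be sketched minimally: two populations ("species") evolve separately, and an individual is scored only in combination with a representative of the other species. The toy fitness below (a collaborating pair that should sum to 10) is an illustrative stand-in, not the fuzzy-system quality measure used in the thesis.

```python
import random

def fitness(a, b):
    # Stand-in objective: in Fuzzy CoCo this would measure how well the
    # fuzzy system built from the two species' parts fits the data.
    return -abs(a + b - 10)  # best when the collaborating pair sums to 10

def step(pop, rep_other):
    # Mutate the current best; replace the worst if the child is better.
    best = max(pop, key=lambda x: fitness(x, rep_other))
    child = best + random.gauss(0, 0.3)
    worst_i = min(range(len(pop)), key=lambda i: fitness(pop[i], rep_other))
    if fitness(child, rep_other) > fitness(pop[worst_i], rep_other):
        pop[worst_i] = child
    return max(pop, key=lambda x: fitness(x, rep_other))

random.seed(1)
pop_a = [random.uniform(0, 10) for _ in range(8)]  # e.g. membership functions
pop_b = [random.uniform(0, 10) for _ in range(8)]  # e.g. rule base
rep_a, rep_b = pop_a[0], pop_b[0]
for _ in range(300):  # alternate evolution of the two species
    rep_a = step(pop_a, rep_b)
    rep_b = step(pop_b, rep_a)
print(abs(rep_a + rep_b - 10))  # small residual: the species have co-adapted
```

Each species can only be evaluated through collaboration, which is what lets the two parameter categories (connective and operational) be designed simultaneously without one population having to encode the whole system.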

Official source

This page is generated automatically and may contain information that is not correct, complete, up to date, or relevant to your search. The same applies to all other pages on this site. Be sure to verify the information against official EPFL sources.


Related concepts (20)

Fuzzy logic

Fuzzy logic is a many-valued logic in which the truth values of variables, instead of being true or false, are real numbers between 0 and 1. In this sense, it extends…

Fuzzy intelligent system

A fuzzy intelligent system (SIF) is a system that integrates (implements) human expertise and aims to automate (imitate) the reasoning of human experts when faced with complex systems…

Logic

Logic (from the Greek, a term derived from a word meaning at once "reason", "language", and "reasoning") is, in a first approach, the study of inference, that is, of the rules…
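The [0, 1] truth values mentioned under fuzzy logic above can be sketched with triangular membership functions and the usual generalisation of the connectives by min, max, and complement. The temperature sets here are hypothetical examples, not taken from the thesis.

```python
# Triangular membership functions and fuzzy connectives (min/max/complement).

def triangle(x, a, b, c):
    """Membership degree of x in a triangular fuzzy set rising from a,
    peaking at b, and falling back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

t = 12.0                              # temperature to evaluate
cold = triangle(t, 0.0, 5.0, 15.0)    # "t is cold" -> 0.3
warm = triangle(t, 10.0, 20.0, 30.0)  # "t is warm" -> 0.2

print(min(cold, warm))  # fuzzy AND -> 0.2
print(max(cold, warm))  # fuzzy OR  -> 0.3
print(1.0 - cold)       # fuzzy NOT -> 0.7
```

A value of 12 degrees is thus simultaneously somewhat cold (0.3) and somewhat warm (0.2), which is exactly the graded membership that crisp two-valued logic cannot express.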

Related publications (8)


This thesis presents the development of a new multi-objective optimisation tool and applies it to a number of industrial problems related to optimising energy systems. Multi-objective optimisation techniques provide the information needed for detailed analyses of design trade-offs between conflicting objectives. For example, if a product must be both inexpensive and high quality, the multi-objective optimiser will provide a range of optimal options from the cheapest (but lowest-quality) alternative to the highest-quality (but most expensive) one, and a range of designs in between, those that are the most interesting to the decision-maker. The optimisation tool developed is the queueing multi-objective optimiser (QMOO), an evolutionary algorithm (EA). EAs are particularly suited to multi-objective optimisation because they work with a population of potential solutions, each representing a different trade-off between objectives. EAs are also well suited to energy system optimisation because problems from that domain are often non-linear, discontinuous, disjoint, and multi-modal. These features make energy system optimisation problems difficult to solve with other optimisation techniques. QMOO has several features that improve its performance on energy systems problems, features that are applicable to a wide range of optimisation problems. QMOO uses cluster analysis techniques to identify separate local optima simultaneously. This technique preserves diversity and helps convergence to difficult-to-find optima. Once normal dominance relations no longer discriminate sufficiently between population members, certain individuals are chosen and removed from the population. Careful choice of the individuals to be removed ensures that convergence continues throughout the optimisation. Preserving the "tail regions" of the population helps the algorithm to explore the full extent of the problem's optimal regions.
QMOO is applied to a number of problems: coke factory placement in Shanxi Province, China; choice of heat recovery system operating temperatures; design of heat-exchanger networks; hybrid vehicle configuration; district heating network design, and others. Several of the problems were optimised previously using single-objective EAs. QMOO proved capable of finding entire ranges of solutions faster than the earlier methods found a single solution. In most cases, QMOO successfully optimises the problems without requiring any specific tuning to each problem. QMOO is also tested on a number of test problems found in the literature. QMOO's techniques for improving convergence prove effective on these problems, and its non-tuned performance is excellent compared to other algorithms found in the literature.
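The trade-off curve that a multi-objective EA such as QMOO searches for is the set of non-dominated designs, and the dominance test itself is simple: one design dominates another if it is no worse in every objective and strictly better in at least one (minimisation assumed). The design tuples below are made up for illustration of the cheap-vs-quality trade-off described above.

```python
# Pareto dominance and non-dominated filtering (all objectives minimised).

def dominates(u, v):
    """True if design u dominates design v."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_front(points):
    """Keep only the designs that no other design dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (cost, 1 - quality): both to be minimised
designs = [(1.0, 0.9), (2.0, 0.5), (3.0, 0.2), (2.5, 0.6), (4.0, 0.2)]
print(pareto_front(designs))  # -> [(1.0, 0.9), (2.0, 0.5), (3.0, 0.2)]
```

The three surviving points are exactly the range of optimal options the text describes: the cheapest low-quality design, the most expensive high-quality one, and the compromise in between; the other two are strictly worse than some alternative.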

Machine intelligence greatly impacts almost all domains of our societies. It is profoundly changing the field of mechanical engineering with new technical possibilities and processes. The education of future engineers also needs to adapt in terms of techniques and even skills.
Using the design of electro-mechanical actuators as a common thread, this work explores the many facets of automated design (modeling, optimization, and education) and looks for the prerequisites essential to its successful application.
The journey starts by building a modular and integrated model. It focuses on the prediction of system-level specifications that yield high added-value for decision-makers and shorten the path from the model to the final product. Combined with multi-objective evolutionary algorithms (MOEAs) and visualization tools, the model forms an automated design tool that helps engineers and decision-makers to rapidly get important insights into their design task. Its potential and benefits are validated through two specific applications. The results, however, also highlight a gap between the reported performance of optimizers on common benchmark problems and the actual performance on these problems.
To further develop optimizers, appropriate and realistic benchmark problems are needed. A subset of the integrated design model is used to formulate a new test suite called MODAct, composed of 20 constrained multi-objective optimization problems (CMOPs) with variable levels of complexity. In addition, numerical approaches to evaluate the constraint landscape of CMOPs are introduced and applied to identify the differences in features of MODAct against 45 benchmark problems from the literature. Further, the convergence performance of three algorithms on the same problems highlights the key role of constraints and, in particular, the number of simultaneously violated constraints in MODAct problems.
In a next step, existing constraint handling strategies suitable for MOEAs, along with a newly proposed technique for many-constraint problems, are evaluated. Their parameters are tuned for different problems. The performance of the various configurations further highlights the difference between MODAct and other benchmark problems and shows the highly competitive results of the proposed constraint handling technique on realistic design problems.
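For context, one common baseline for constraint handling in evolutionary algorithms (not necessarily the new technique proposed above) is the feasibility-rule comparison attributed to Deb: feasible beats infeasible, and between two infeasible solutions the smaller total constraint violation wins. A minimal sketch:

```python
# Feasibility-rule comparison for constrained minimisation.

def violation(g_values):
    """Total violation of inequality constraints g_i(x) <= 0."""
    return sum(max(0.0, g) for g in g_values)

def better(sol_a, sol_b):
    """sol = (objective, [g_1, ..., g_m]); True if sol_a is preferred."""
    fa, va = sol_a[0], violation(sol_a[1])
    fb, vb = sol_b[0], violation(sol_b[1])
    if va == 0.0 and vb == 0.0:
        return fa < fb          # both feasible: compare objectives
    if (va == 0.0) != (vb == 0.0):
        return va == 0.0        # feasible dominates infeasible
    return va < vb              # both infeasible: less violation wins

print(better((5.0, [-1.0]), (3.0, [0.2])))  # True: feasibility comes first
```

Under these rules the number and magnitude of simultaneously violated constraints directly shape the selection pressure, which is consistent with their key role observed on the MODAct problems.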
As the technical limits are removed, the impact of automated design on the work of future engineers should be considered. On the one hand, the development of professional skills by students working on team projects in different settings has been evaluated thanks to 205 students from three classes. Explicitly addressing these skills within the project seems key to supporting stronger and broader learning, suggesting changes that do not require a full curriculum redesign. On the other hand, nine groups (33 students) were asked to design an actuator using a conventional approach followed by an automated design approach. The actuators suggested by students using the automated tool outperform the designs obtained through the traditional approach. Six groups even suggested solutions cheaper than the product of experienced industry engineers, three of which were also smaller. Students thus proved capable of leveraging the tool within a short time. The analysis of their mistakes suggests possible improvements for future tools. As these students leave university, they carry with them the hope of seeing such methods spread in industry.

Two of the most basic problems encountered in numerical optimization are least-squares problems and systems of nonlinear equations. The use of more and more complex simulation tools on high-performance computers requires solving problems involving an increasingly large number of variables. The main thrust of this thesis is the design of new algorithmic methods for solving large-scale instances of these two problems. Although they are relevant in many different applications, we concentrate specifically on real applications encountered in the context of Intelligent Transportation Systems to illustrate their performance. First, we propose a new approach for the estimation and prediction of Origin-Destination tables. This problem is usually solved using a Kalman filter approach, which refers to both the formulation and the resolution algorithm. We prefer to consider an explicit least-squares formulation. It offers convenient and flexible algorithms especially designed to solve large-scale problems. Numerical results provide evidence that this approach requires significantly less computational effort than the Kalman filter algorithm. Moreover, it makes it possible to consider larger problems, likely to occur in real applications. Second, a new class of quasi-Newton methods for solving systems of nonlinear equations is presented. The main idea is to generalize classical methods by building a model using more than two previous iterates. We use a least-squares approach to calibrate this model, as exact interpolation requires a fixed number of iterates and may be numerically problematic. Based on classical assumptions, we give a proof of local convergence for this class of methods. Computational comparisons with standard quasi-Newton methods highlight substantial improvements in terms of robustness and number of function evaluations.
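For reference, the classical two-iterate quasi-Newton baseline that such a class generalises is Broyden's method, in which the Jacobian approximation is updated by a rank-one secant correction from the latest step only. A minimal sketch on a toy two-equation system (not the thesis's multi-iterate method):

```python
import numpy as np

def broyden(F, x0, tol=1e-10, max_iter=100):
    """Broyden's 'good' method for F(x) = 0."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x))                  # initial Jacobian approximation
    Fx = F(x)
    for _ in range(max_iter):
        s = np.linalg.solve(B, -Fx)     # quasi-Newton step: B s = -F(x)
        x_new = x + s
        F_new = F(x_new)
        y = F_new - Fx
        B += np.outer(y - B @ s, s) / (s @ s)  # rank-one secant update
        x, Fx = x_new, F_new
        if np.linalg.norm(Fx) < tol:
            break
    return x

# Toy system: x0^2 + x1^2 = 4 and x0 * x1 = 1
F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] * x[1] - 1.0])
root = broyden(F, [2.0, 0.5])
print(np.linalg.norm(F(root)))  # close to zero at the computed root
```

The secant condition `B_new s = y` only ties the update to the last two iterates; building the model from more previous iterates, as described above, is what requires the least-squares calibration.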
We derive from this class of methods a matrix-free algorithm designed to solve large-scale systems of nonlinear equations without assuming any particular structure of the problems. We have successfully tried out the method on problems with up to one million variables. Computational experiments on standard problems show that this algorithm outperforms classical large-scale quasi-Newton methods in terms of efficiency and robustness. Moreover, its numerical performance is similar to that of Newton-Krylov methods, currently considered the best for solving large-scale systems of equations. In addition, we provide numerical evidence of the superiority of our method for solving noisy systems of nonlinear equations. This method is then applied to consistent anticipatory route guidance generation. Route guidance refers to information provided to travelers in an attempt to facilitate their decisions relative to departure time, travel mode, and route. We are specifically interested in consistent anticipatory route guidance, in which real-time traffic measurements are used to make short-term predictions, involving complex simulation tools, of future traffic conditions. These predictions are the basis of the guidance information that is provided to users. By consistent, we mean that the anticipated traffic conditions used to generate the guidance must be similar to the traffic conditions that the travelers are going to experience on the network. The problem is tricky because, unlike weather forecasting, where the real system under consideration is not affected by the provision of information, the very fact of providing travel information may modify future traffic conditions and, therefore, invalidate the prediction that was used to generate it. Bottom (2000) has proposed a general fixed point formulation of this problem with the following characteristics.
First, as guidance generation involves considerable amounts of computation, this fixed point problem must be solved quickly and accurately enough for the results to be timely and of use to drivers. Secondly, the unavailability of a closed-form objective function and the presence of noise due to the use of simulation tools prevent the use of classical algorithms. A number of simulation experiments have been run based on two software systems, including DynaMIT, a state-of-the-art real-time computer system for traffic estimation and prediction developed at the Intelligent Transportation Systems Program of the Massachusetts Institute of Technology (MIT). These numerical results underline the good behavior of our large-scale method compared to classical fixed point methods for solving the consistent anticipatory route guidance problem. We close with some comments about promising future directions of research.
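A classical damped fixed-point scheme often used as a baseline in traffic assignment is the method of successive averages (MSA). The sketch below, with a made-up one-dimensional "prediction" map, illustrates the consistency condition (the predicted conditions equal the experienced conditions) but is not the thesis's large-scale method.

```python
# Method of successive averages: x_{k+1} = x_k + (T(x_k) - x_k) / k,
# a damped iteration toward a fixed point x* = T(x*).

def msa(T, x0, iters=200):
    x = x0
    for k in range(1, iters + 1):
        x = x + (T(x) - x) / k  # decreasing step sizes damp oscillations
    return x

# Toy prediction map: providing guidance based on x shifts traffic to 1 - 0.9 x
T = lambda x: 1.0 - 0.9 * x          # fixed point at 1 / 1.9
x_star = msa(T, 0.0)
print(abs(T(x_star) - x_star))       # consistency gap: prediction matches outcome
```

At the fixed point, the traffic conditions anticipated when generating the guidance coincide with those the travelers actually experience, which is exactly the consistency requirement stated above.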