Irreducible complexity

Summary

Irreducible complexity is the thesis that certain biological systems are too complex to be the result of the evolution of simpler or "less complete" precursors through random mutation and natural selection. The term was coined and defined in 1996 by biochemistry professor Michael Behe, an irreducibly complex system being one whose function is lost if any one of its interacting parts is removed. The examples cited by Behe, the coagulation cascade, the motor (or basal body) of cellular flagella, and the immune system, could therefore not be the result of natural evolution: any precursor to the complete system would not function, and would therefore confer no selective advantage.
More generally, this argument is used by proponents of creationism and intelligent design to refute the current scientific theory of evolution and to prove the involvement of a divine or intelligent cause in the creation of life. These arguments are longstanding.



Related publications (29)


Related courses (3)

MATH-334: Representation theory

Study the basics of representation theory of groups and associative algebras.

PHYS-314: Quantum physics II

The aim of this course is to familiarize students with the concepts, methods, and consequences of quantum physics. In particular, angular momentum, perturbation theory, many-particle systems, symmetries, and quantum correlations will be covered.

MICRO-413: Advanced additive manufacturing technologies

Advanced 3D forming techniques for high throughput and high resolution (nanometric) for large scale production. Digital manufacturing of functional layers, microsystems and smart systems.

Related concepts (3)

Intelligent design

Intelligent design is a pseudo-scientific theory according to which certain features of living things are best explained by an intelligent cause. This thesis has notably been developed by the Discovery Institute, a think tank.

Creationism

[Thumbnail: The First Day of Creation, Nuremberg Chronicle, 1493. In the Bible, God "brings the world into existence".]
[Thumbnail: God creating the animals, medieval illustration of 1445 (Germany).]

Teleological argument (religion)

The teleological argument, or argument from divine design, is an argument for the existence of God based on perceptible evidence of order, purpose, design, or direction in nature.

Related lectures (6)

Mass is one of the crucial parameters for hardware that has to be placed in Earth orbit. Because of the harsh space environment, materials with the highest specific properties are desired for space missions. The rise and development of new technologies, such as additive manufacturing (AM), opened new opportunities in part-design complexity, periodic cellular structures (PCS) being one of them. The present thesis investigates the potential implementation of PCS in space applications, particularly for structures and for micro-meteoroid and orbital debris (MMOD) impact shields. This was achieved in three steps:
Four different types of AlSi12 PCS manufactured by selective laser melting (SLM) were tested under quasi-static compression to measure the dependence of the mechanical properties on topology and to characterize the failure mode. Properties ranging from 3 to 4 GPa for the compressive modulus, 5 to 12 MPa for the yield stress, 12 to 20 MPa for the plateau stress, and 2 to 8 MJ/m3 for the absorbed energy were obtained. An unexpected failure mode was observed compared to classical cellular metals, namely a brittle failure occurring by global shearing. A predictive failure criterion was established based on topology considerations and correlated with most of the results reported in the literature. A preliminary test campaign on tensile specimens was performed to build numerical models that were fed into a finite element analysis. Good agreement with experimental data was shown, and the importance of microplasticity effects in this class of material was highlighted.
An alternative process was developed to produce AlSi1 PCS by investment casting. The process is based on replication of a polymer preform used to build a NaCl mold. It was observed that the quality of the final cast part depends mainly on the grain size of the salt, with an optimum identified for distributions between 125 and 180 µm. Optimization of the process reduced the drying time by a factor of 6. The main process parameters include a drying temperature of 80 °C and infiltration at 660 °C under 300 mbar. From this process, PCS having an energy absorption capacity of 15 MJ/m3 with an efficiency of 80% were produced.
Hypervelocity impact tests were conducted on cast PCS and stochastic structures. The objective was to hit the structures with a 2 mm-diameter aluminum sphere at velocities close to 7 km/s. The influence of the sample topology, the orientation, and the bumper material was assessed. Stochastic structures successfully stopped the projectile in all configurations. The beneficial effect of the bumper was measured, the crater depth being reduced from 20 mm to 14 mm. This type of structure exhibited an areal density (0.8 g/cm2) comparable to a simple Whipple shield design. PCS performed poorly in mitigating the impact: the debris passed through all the structures regardless of the test configuration, owing to the open channels present.
PCS are good candidates for use in space hardware, but their design and manufacturing process need to be chosen carefully depending on the specific application. AM PCS are suitable for structural applications requiring a high compressive modulus and yield stress. Cast PCS are well suited to shock absorbers. A more random design would be preferable for MMOD shielding applications.

Large hybrid objects integrating multiple functions, whose scale (over 100'000 m²) is halfway between a fragment of a city and a large-scale building: these are the key features of the complex projects, usually referred to as Big Buildings, that are triggering new debates on the subject of design complexity. Recurrently dominated by housing, the paradigms of these mixed architectural forms, "cities within cities" developed either horizontally or vertically, may recall Le Corbusier's Unités d'Habitation, or even the American hybrids, owing to the multiple simultaneous conditions they manage to accommodate within their generic envelopes. In Europe, whether as a consequence of urban densification and the resulting need to contain urban sprawl, or driven by political and economic dynamics of speculation and globalization, we are witnessing the widespread emergence of projects of this kind, often located near mobility interfaces or in former industrial areas undergoing regeneration, usually with privileged connections to city centres. The development of such projects is hypothetically becoming a trigger for new ways of producing collective housing, integrated within a more complex system of activities and bearing new strategies for articulating housing with other programmes, new models of public space, new typological experiments in dwellings, and new ways of invigorating social mix. Our intention is to investigate how effective the planning of collective housing within the massiveness of the Big Building is, and how the design of housing inside this specific milieu can generate new potentials and new knowledge in the architectural domain of housing.

This thesis describes a novel digital background calibration scheme for pipelined ADCs with nonlinear interstage gain. Errors caused by the nonlinear gains are corrected in real-time by adaptively post-processing the digital stage outputs. The goal of this digital error correction is to improve the power efficiency of high-precision analog-to-digital conversion by relaxing the linearity and matching constraints on the analog pipeline stages and compensating the resulting distortion through digital post-processing. This approach is motivated by the observation that technology scaling reduces the energy cost of digital signal processing and at the same time makes high-precision analog signal processing harder because of reduced intrinsic device gain and reduced voltage headroom. In particular, the proposed calibration approach enables the use of power efficient circuits in noise-limited high-resolution, high-speed converters. Alternative stage circuit topologies that are more power efficient than their traditional counterparts are typically too nonlinear and too sensitive to temperature and bias variations to be employed in the critical stages of such converters without adaptive error correction. The proposed calibration scheme removes the effects of nonlinear interstage gain, sub-DAC nonlinearity, and mismatch between reference voltages of different stages. Gain errors and reference voltage mismatch are continuously tracked during normal operation and may thus be time-varying. Sub-DAC nonlinearity is assumed to be constant. A method to characterize the time-invariant non-ideal sub-DAC characteristics during an initial one-time offline calibration phase is proposed. Because the method only uses the existing uncalibrated analog hardware, it can only determine the relative sizes of the DAC error terms. One or two scale factors per sub-DAC remain to be estimated by the adaptation algorithm used to track the time-varying gain parameters. 
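As a rough, hypothetical illustration of this idea (not the thesis's actual implementation), the sketch below digitally inverts a made-up nonlinear stage gain with a piecewise-linear model; the gain function, breakpoint count, and offline calibration sweep are all assumptions for the example:

```python
import numpy as np

# Hypothetical nonlinear interstage gain: nominal gain 4 with cubic compression.
def stage_gain(x):
    return 4.0 * x - 0.3 * x**3

# "Offline calibration": sweep the input and record the measured transfer curve.
x_true = np.linspace(-1.0, 1.0, 1001)
y_meas = stage_gain(x_true)

# Piecewise-linear (PWL) inverse model: 33 breakpoints spanning the output
# range, each storing the corrected (true input) value at that breakpoint.
breakpoints = np.linspace(y_meas.min(), y_meas.max(), 33)
pwl_values = np.interp(breakpoints, y_meas, x_true)

def pwl_correct(y):
    """Digitally invert the stage nonlinearity with the PWL model."""
    return np.interp(y, breakpoints, pwl_values)

# Residual error after correction, vs. 7.5 % peak distortion before it.
err = np.max(np.abs(pwl_correct(y_meas) - x_true))
print(err < 1e-3)   # → True
```

Because the PWL model makes no polynomial-order assumption, it can track any monotonic distortion shape, which is the flexibility the thesis exploits.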
Because the scale factor is constant, it can be excluded from adaptation after its estimate has converged. This offline characterization of the sub-DACs ensures that the entire characteristic of all sub-DACs can be estimated, and that calibration of DAC errors can be permanently turned off after initial convergence. Furthermore, it eliminates degrees of freedom in the error correction function and fixes the gain of the calibrated ADC. The digital postprocessor linearizes the ADC transfer characteristic by applying an adaptive inverse model of the analog signal path to the digital outputs of the pipeline stages. The model uses piecewise linear (PWL) functions to approximate the inverse of the nonlinear stage gains. Previously reported background calibration methods are limited to low-order polynomial gain models; the PWL model is more general, so the analog signal path can be optimized for power efficiency without any constraint on high-order distortion. The previously reported split-ADC architecture is used to enable background adaptation of the error correction parameters during normal converter operation, without requiring an accurate reference ADC. The converter to be calibrated is split into two nominally identical channels, both processing the same input signal. The average of the two channel outputs is used as the overall output, and their difference is used as an error signal. The mean-square value of this error signal serves as the performance function minimized by the adaptation algorithm. Because two non-ideal ADCs are used as reference channels for each other, precautions are needed to prevent the adaptation algorithm from simply equalizing the transfer characteristics of the two ADCs. The effect of the flexible gain model on these parasitic solutions is analyzed.
A previously reported method to eliminate parasitic solutions in the case of linear gains is modified to also work with arbitrary nonlinear gain. A simplified version of the normalized least-mean-squares (NLMS) algorithm is used for parameter adaptation. Normalization assumes that the performance function is quadratic in the parameters, which is almost true because the channel output difference is almost linear in the error correction parameters. Because a low-noise reference signal is used, the LMS loop does not need to filter out noise. The normalization in conjunction with the low-noise reference signal significantly mitigates the convergence speed versus steady-state error trade-off. Heuristic strategies to control the NLMS algorithm are proposed to address identified weaknesses of the basic adaptation algorithm. The main benefits of the heuristic control are faster initial convergence and faster recovery from transient disturbances. Fast initial convergence is achieved by gradually increasing the granularity of the PWL gain models. Fast recovery after fast parameter changes is achieved by selectively reducing the search space for certain samples. A possible architecture for a hardware implementation of the postprocessor is analyzed to demonstrate the practicability of the proposed digital error correction scheme, and to propose detailed architectures for critical blocks. The analysis concludes that using the proposed architecture, the hardware implementation poses no specific difficulty in terms of area, power, or design complexity. A novel approach for adaptive nonlinear digital error correction for pipelined ADCs is proposed. The error correction models amplifier nonlinearity as a general piecewise linear function for maximum flexibility. The algorithm can be implemented using simple arithmetic and a small amount of memory only.
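The split-ADC adaptation loop can be caricatured as follows. This toy reduces the correction to a single gain factor on one channel, with the other channel held fixed (the thesis adapts full PWL models on both channels, with precautions against parasitic solutions); all names and values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-channel ("split-ADC") model: both channels digitize the
# same input; channel B has an unknown gain error that we remove digitally.
gain_a = 1.000          # channel A: assumed already calibrated (fixed scale)
gain_b = 1.050          # channel B: unknown 5 % gain error
c = 1.0                 # adaptive digital correction factor for channel B
mu, eps = 0.1, 1e-12    # NLMS step size and regularizer

for _ in range(2000):
    x = rng.uniform(-1.0, 1.0)     # shared analog input sample
    ya = gain_a * x                # channel A output
    yb = c * gain_b * x            # corrected channel B output
    e = ya - yb                    # channel difference = error signal
    # Simplified NLMS step: the gradient of e**2 w.r.t. c is -2*e*(gain_b*x);
    # gain_b is unknown, so use the observable raw output yb / c = gain_b*x.
    u = yb / c
    c += mu * e * u / (u * u + eps)

print(round(c * gain_b, 3))        # → 1.0 (corrected gain converges to 1)
```

Because the error signal contains no quantization-free reference noise in this toy, convergence is geometric; the real scheme faces the noise and parasitic-solution issues discussed above.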