
# Surface parameterization and optimum design methodology for hydraulic turbines

EPFL thesis

Abstract

This thesis presents a methodology for the design optimization of hydraulic runner blades. The originality of the methodology lies in the geometric definition of the blade shapes, which uses parametric surfaces instead of a set of profiles. The main advantage of using surfaces is the reduced number of parameters required. The use of surfaces also calls for a blade-construction technique different from traditional approaches. NURBS surfaces are used for the geometric representation; the properties of this parametric formulation provide the necessary flexibility and accuracy. Moreover, the surface approach can be seen as a way to free the blade design from traditional discrete sectional approaches. Current blade optimization procedures are a compromise between the quality of the design, the fidelity of its performance analysis, and the resulting computational cost. The improvements provided by this geometric definition allow more realistic analysis tools to be used for design evaluation. Thus, Navier-Stokes (k–ε) simulations are integrated in a simple and direct optimization process. The resulting methodology is not penalized by the computational effort required and therefore becomes attractive for industrial applications. Finally, a number of examples demonstrate the feasibility of the proposed optimization. These examples illustrate the application of the methodology at different levels of geometric complexity. They are interesting not only for the results obtained, but also because their computational cost is acceptable for routine industrial use.
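As an illustration of the surface-based geometric definition, the sketch below evaluates a point on a NURBS surface with the Cox-de Boor recursion. It is a minimal, generic evaluator (the control net `P`, weights `W`, degrees `p`, `q`, and clamped knot vectors `U`, `V` are illustrative inputs), not the blade parameterization developed in the thesis:

```python
def basis(i, p, u, U):
    """i-th degree-p B-spline basis function at parameter u (Cox-de Boor)."""
    if p == 0:
        # Half-open spans; include u == U[-1] in the last non-empty span.
        if U[i] <= u < U[i + 1]:
            return 1.0
        return 1.0 if u == U[-1] and U[i] < U[i + 1] == U[-1] else 0.0
    left = 0.0
    if U[i + p] != U[i]:
        left = (u - U[i]) / (U[i + p] - U[i]) * basis(i, p - 1, u, U)
    right = 0.0
    if U[i + p + 1] != U[i + 1]:
        right = (U[i + p + 1] - u) / (U[i + p + 1] - U[i + 1]) * basis(i + 1, p - 1, u, U)
    return left + right

def nurbs_surface_point(P, W, p, q, U, V, u, v):
    """Evaluate S(u,v) = sum N_i(u) N_j(v) w_ij P_ij / sum N_i(u) N_j(v) w_ij."""
    num, den = [0.0, 0.0, 0.0], 0.0
    for i in range(len(P)):
        for j in range(len(P[0])):
            b = basis(i, p, u, U) * basis(j, q, v, V) * W[i][j]
            den += b
            for k in range(3):
                num[k] += b * P[i][j][k]
    return tuple(c / den for c in num)

# Flat bilinear patch with unit weights: the evaluator reproduces
# bilinear interpolation of the control points.
P = [[(0.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
     [(1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]]
W = [[1.0, 1.0], [1.0, 1.0]]
U = V = [0.0, 0.0, 1.0, 1.0]
print(nurbs_surface_point(P, W, 1, 1, U, V, 0.5, 0.5))  # (0.5, 0.5, 0.0)
```

A real blade surface would use higher degrees, a denser control net, and non-unit weights, but the evaluation mechanism is the same; the small number of control points is what keeps the optimization parameter count low.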




Related concepts (18)

Methodology

Methodology is the study of scientific methods as a whole. It can be considered the science of method, or the "method of methods".

Hydraulic turbine

A hydraulic turbine is a rotating machine that produces mechanical energy from water in motion (a watercourse or tide) or potentially in motion (a dam).

Time

Time is a notion that accounts for change.

Related publications (14)


The invention of the integrated circuit, together with continuing progress in the manufacturing process, is the fundamental engine behind all technologies that support today's information society. The vast majority of microelectronic applications today use the well-established CMOS process and fabrication technology, which exhibits high reliability. The assumption of reliable components has underpinned the development of electronic systems fabricated over the past four decades. The steady downscaling of CMOS technology has led to devices with nanometer dimensions. For future nano-circuits, emerging nanodevices and their associated interconnects, the expected higher probabilities of failure, as well as higher sensitivity to noise and variations, could make future chips prohibitively unreliable. The systems to be fabricated will be made of unreliable components, and achieving 100% correctness will be not only extremely costly, but perhaps plainly impossible. The global picture is that reliability emerges as one of the most significant threats to the design of future integrated computing systems. Building reliable systems out of unreliable components will require increased cooperative involvement of logic designers and architects, where high-level techniques rely on lower-level support based on novel modeling that treats component and system reliability as design parameters. An architecture suitable for circuit-level and gate-level redundant modules, exhibiting significant immunity to permanent and random failures as well as to unwanted fluctuations of the fabrication parameters, is presented; it is based on a four-layer feed-forward topology, using averaging and thresholding as the core voter mechanisms.
The architecture, with both fixed and adaptable thresholds, is compared to triple and R-fold modular redundancy techniques, and its superiority is demonstrated through numerical simulations as well as analytical developments. A chip implementation of the architecture is realized. Other applications of the architecture, such as the minimization of delay variations, are identified and explored. A novel general method enabling the introduction of fault tolerance and the evaluation of circuit and architecture reliability is proposed. The method is based on modeling the probability density functions (PDFs) of unreliable components and evaluating them for a given reliability architecture. PDF modeling, presented for the first time in the context of a realistic technology and arbitrary circuit size, is based on a state-of-the-art reliability evaluation algorithm and offers scalability, speed, and accuracy. Fault modeling has also been developed to support PDF modeling. In the second part of the thesis, a new methodology that introduces reliability into existing design flows is proposed. The methodology consists of partitioning the whole system into reliability-optimal partitions and applying reliability evaluation and optimization at the local and system levels. The system-level reliability improvement of different fault-tolerant techniques is studied in depth. Optimal partition-size analysis and redundancy optimization have been performed for the first time in the context of a large-scale system, showing that a target reliability can be achieved with low to moderate redundancy factors (R < 50) even for high defect densities (device failure rates up to 10⁻³). The optimal window of application of each fault-tolerant technique with respect to defect density is presented as a way to find the optimum design trade-off between reliability and power/area.
R-fold modular redundancy with distributed voting and an averaging voter is selected as the most promising candidate for implementation in trillion-transistor logic systems. Finally, a realistic circuit example of the methodology implementation is verified using simulations.
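The redundancy-versus-defect-density trade-off above can be illustrated with the textbook binomial model of R-fold modular redundancy under majority voting. This is a deliberately simplified sketch (independent, identically failing modules and a perfect voter), not the PDF-based evaluation method developed in the thesis:

```python
from math import comb

def r_fold_failure(R, eps):
    """Failure probability of a majority vote over R independent modules,
    each wrong with probability eps (perfect voter assumed): the vote fails
    when a strict majority of modules fail."""
    return sum(comb(R, k) * eps ** k * (1 - eps) ** (R - k)
               for k in range(R // 2 + 1, R + 1))

# Device failure rate 1e-3, matching the defect densities quoted above,
# and moderate redundancy factors (R < 50).
eps = 1e-3
for R in (3, 9, 25, 49):
    print(R, r_fold_failure(R, eps))
```

Even at ε = 10⁻³, the failure probability drops by orders of magnitude with each increase in R, which is consistent with the abstract's claim that low to moderate redundancy factors can reach a target reliability at high defect densities.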

Until now, the preferred solution for MEMS (Micro Electro Mechanical Systems) actuation has been the electrostatic one. The main reason is that such actuators can be easily manufactured following microfabrication rules, as their geometry fits the characteristics of this technology perfectly. Electromagnetic systems, on the other hand, are rarely developed at small scale. Two explanations can be given: first, ferromagnetic materials are not available in standard cleanroom processes; secondly, adapting the typically three-dimensional geometry of electromagnetic drives to a planar technology is quite difficult. This thesis addresses the design of a new electromagnetic MEMS micromotor. The aim is to develop a new motor able to satisfy the specifications of the watchmaking industry. The state of the art and the scaling laws show that a permanent magnet is essential to obtain high-performance motors at small scale. This is one of the reasons why, according to the project specifications, a permanent-magnet synchronous motor (BLDC) seems to be the best solution. The designed motor is hybrid because it combines a microfabricated stator with a conventional magnet obtained through standard macroscopic fabrication processes. Its geometry is characterized by the overlapping of the rotor over the stator, and it is well suited to the microsystems manufacturing principle, which is based on the design of stacked layers. To design the motor, an analytical electromagnetic model has been developed. The accuracy of this mathematical model was validated by finite-element simulations before it was used to find the optimal design. The optimization results are very encouraging and demonstrate the suitability of such an electromagnetic micromotor for the watch industry: at least the same performance as the Lavet motor, which currently drives the watch hands, is predicted. Manufacturing prototypes is indispensable for validating the theoretical analysis.
Moreover, for the current project, this step is all the more important because the feasibility of the stator microfabrication must also be demonstrated. These components are made following a process flow developed especially for this application by combining several methods available in cleanrooms. It makes it possible to fabricate coils with two copper layers. Although the flow may appear complicated and involves many steps, a great effort has been made to keep it as simple and reliable as possible. Prototypes have been assembled using standard watchmaker bearings. The goals are to validate the theoretical results and to highlight the critical fabrication steps as well as secondary phenomena that were not considered during the design phase. Once again, the characterization of the motors demonstrates the great potential of electromagnetic MEMS.
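The scaling-law argument for preferring a permanent magnet can be sketched with the classical orders of magnitude for magnetic interactions: at constant current density, the torque between a magnet and a coil scales roughly as L⁴, while the torque between two coils scales as L⁵, since the coil's own field also shrinks with size. These exponents are the standard textbook values, not the analytical model developed in the thesis:

```python
def torque_scaling(s):
    """Relative torque after shrinking every length by a factor s,
    at constant current density (classical scaling exponents)."""
    return {
        "magnet_coil": s ** 4,  # force ~ j * B_magnet * volume, B_magnet constant
        "coil_coil": s ** 5,    # the driving coil's field itself shrinks ~ s
    }

t = torque_scaling(0.01)  # e.g. shrinking a design by a factor of 100
print(t["magnet_coil"] / t["coil_coil"])  # magnet drive wins by roughly 1/s
```

Under this crude model, a hundredfold miniaturization makes the magnet-based drive about a hundred times stronger than a purely coil-based one, which is why a permanent magnet is considered essential at small scale.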

The Navier–Stokes equations play a key role in the modeling of blood flows in the vascular system. The cost of solving the 3D linear system obtained by Finite Element (FE) discretization of the equations, using tetrahedral unstructured meshes and time-advancing finite difference schemes, is very high; to lower the time to solution and to address complex problems, using a parallel framework and developing specific preconditioners become necessary. Important factors in measuring the parallel performance of a preconditioner are the independence of the number of iterations with respect to the number of parallel tasks (scalability of the preconditioner), the mesh size (optimality), and the physical parameters (robustness), as well as its strong and weak scalability properties. We propose a model to explain the effect of nonscalable preconditioners on a parallel iterative solver. We then propose approximate versions of state-of-the-art preconditioners for the Navier–Stokes equations, namely the Pressure Convection–Diffusion (PCD), Yosida, and SIMPLE preconditioners. We exploit factorizations of the linearized system in which inverses are handled using specific core preconditioners such as, e.g., algebraic additive Schwarz or algebraic multigrid preconditioners. We present new results for the Relaxed Dimensional Factorization (RDF) preconditioner, which allows for automatic parameter tuning. Weak and strong scalability results illustrate the efficiency of our approach on both classical benchmark problems and test cases relevant to hemodynamics simulations, using up to 8192 cores. We then extend our preconditioners for the Navier–Stokes equations to Fluid–Structure Interaction problems; we devise preconditioners that rely on physics-specific ad hoc preconditioners for the fluid, structure, and geometry subproblems.
We compare the evolution of the number of iterations needed to solve the full system with classical methods on a geometry of physiological interest. We also investigate the strong scalability of our FSI solver. Finally, we describe how to develop a flexible C++ Navier–Stokes solver using a policy-based design. In particular, we discuss different techniques for accessing a host class's member variables from its compound policies, and their main advantages.
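The role of the preconditioner in controlling iteration counts can be illustrated with a toy example: a Jacobi-preconditioned Richardson iteration on a tiny 2×2 system. This is far simpler than the PCD, Yosida, or SIMPLE preconditioners discussed above; it only shows the mechanism by which applying M⁻¹ to the residual turns a diverging stationary iteration into a rapidly converging one:

```python
def richardson(A, b, M_inv=None, tol=1e-8, max_it=500):
    """Stationary iteration x <- x + M^{-1} (b - A x).
    A is a dense matrix given as a list of rows; M_inv maps a residual r
    to M^{-1} r (identity, i.e. no preconditioner, if None)."""
    n = len(b)
    x = [0.0] * n
    for it in range(1, max_it + 1):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        if max(abs(ri) for ri in r) < tol:
            return x, it
        z = M_inv(r) if M_inv else r
        x = [x[i] + z[i] for i in range(n)]
    return x, max_it

A = [[2.0, 1.0], [1.0, 3.0]]
b = [3.0, 5.0]                     # exact solution: x = (0.8, 1.4)
jacobi = lambda r: [r[i] / A[i][i] for i in range(len(r))]  # M = diag(A)

x_prec, it_prec = richardson(A, b, M_inv=jacobi)
x_none, it_none = richardson(A, b)  # spectral radius of I - A > 1: diverges
print(it_prec, it_none)
```

With the Jacobi preconditioner the iteration converges in a few dozen steps; without it, the iteration never converges and exhausts `max_it`. Scalable preconditioners like those in the thesis aim for the analogous property on massively parallel systems: an iteration count that stays essentially flat as cores, mesh resolution, and physical parameters change.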