
# Error

Summary

An error (from the Latin errare, meaning 'to stray' or 'to wander') is an action which is inaccurate or incorrect. In some usages, an error is synonymous with a mistake.
In statistics, "error" refers to the difference between a computed value and the correct value. An error can result in failure or in a deviation from the intended performance or behavior.
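The statistical notion above (the difference between a computed value and the correct value) can be sketched in a few lines of Python; the function names are illustrative, not from any particular library:

```python
def absolute_error(computed, correct):
    """Difference between a computed value and the correct value."""
    return abs(computed - correct)

def relative_error(computed, correct):
    """Absolute error scaled by the magnitude of the correct value."""
    return abs(computed - correct) / abs(correct)

# Approximating pi by 22/7:
pi = 3.141592653589793
print(absolute_error(22 / 7, pi))  # roughly 0.00126
print(relative_error(22 / 7, pi))  # roughly 0.0004
```

The relative form is often the more meaningful of the two, since it puts the deviation in proportion to the quantity being approximated.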
Human behavior
One reference differentiates between "error" and "mistake" as follows:
An 'error' is a deviation from accuracy or correctness. A 'mistake' is an error caused by a fault: the fault being misjudgment, carelessness, or forgetfulness. Say that I run a stop sign because I was in a hurry and wasn't concentrating, and the police stop me; that is a mistake. If, however, I try to park in an area with conflicting signs, and I get a ticket because my interpretation of what the signs meant was incorrect, that would be an error.



Related concepts (2)

Human error

Human error is an action that has been done but that was "not intended by the actor; not desired by a set of rules or an external observer; or that led the task or system outside its acceptable limits".

Numerical analysis

Numerical analysis is the study of algorithms that use numerical approximation (as opposed to symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete mathematics).
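Numerical approximation inevitably introduces error of exactly the kind defined above. A standard illustration (a generic sketch, not tied to any particular course material) is the truncation error of a forward-difference derivative, which shrinks roughly in proportion to the step size:

```python
import math

def forward_difference(f, x, h):
    """First-derivative approximation (f(x+h) - f(x)) / h.

    Its truncation error is O(h): halving h roughly halves the error.
    """
    return (f(x + h) - f(x)) / h

exact = math.cos(1.0)  # true derivative of sin at x = 1
for h in (1e-1, 1e-2, 1e-3):
    approx = forward_difference(math.sin, 1.0, h)
    print(h, abs(approx - exact))  # error shrinks roughly linearly with h
```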

Related courses (152)

ME-213: Programmation pour ingénieur

Put into practice the programming fundamentals covered in the previous semester. Develop structured software. Learn methods for debugging software. Introduction to scientific programming. Introduction to virtual instrumentation.

MGT-581: Introduction to econometrics

The course provides an introduction to econometrics. The objective is to learn how to make valid (i.e., causal) inferences from economic data. It explains the main estimators and presents methods for dealing with endogeneity issues.

COM-500: Statistical signal and data processing through applications

Building on the basic concepts of sampling, filtering, and Fourier transforms, we address stochastic modeling, spectral analysis, estimation and prediction, classification, and adaptive filtering, with an application-oriented approach and hands-on numerical exercises.

Optical tomography has been widely investigated for biomedical imaging applications. In recent years, it has been combined with digital holography and employed to produce high-quality images of phase objects such as cells. In this thesis, we look into some of the newest optical Diffraction Tomography (DT) based techniques for solving Three-Dimensional (3D) reconstruction problems, and discuss and compare some of the leading ideas and papers. We then propose a neural-network-based algorithm to solve this problem and apply it to both synthetic and biological samples. Conventional phase tomography with coherent light and off-axis recording is performed. The Beam Propagation Method (BPM) is used to model scattering, and each x-y plane is modeled by a layer of neurons in the BPM. The network's output (simulated data) is compared to the experimental measurements, and the error is used to correct the weights of the neurons (the refractive indices of the nodes) using standard error back-propagation techniques. The proposed algorithm is detailed and investigated. We then look into resolution-conserving regularization and discuss a method for selecting regularization parameters. In addition, the local-minima and phase-unwrapping problems are discussed and ways of avoiding them are investigated. It is shown that the proposed learning tomography (LT) achieves better performance than other techniques such as DT, especially when an insufficient or incomplete set of measurements is available. We also explore the role of regularization in obtaining higher-fidelity images without losing resolution. It is experimentally shown that, by accounting for multiple scattering, the LT reconstruction greatly outperforms DT when the sample contains two or more layers of cells or beads. Reconstruction from intensity measurements is then investigated, and a 3D reconstruction of a live cell during apoptosis is presented in time-lapse format.
Finally, we present a comparison with leading papers and commercially available systems. It is shown that, compared to other existing algorithms, the results of the proposed method are of better quality; in particular, parasitic granular structures and the missing-cone artifact are reduced. Overall, the prospects of our approach for high-resolution tomographic imaging in a range of practical applications are rich.
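The error-driven update loop described in the abstract (simulate, compare to the measurement, correct the refractive indices) can be illustrated schematically. The sketch below is a hypothetical stand-in, not the thesis's method: it replaces the BPM forward model with an arbitrary callable and uses a finite-difference gradient of the squared mismatch instead of true error back-propagation.

```python
import numpy as np

def update_refractive_indices(n, forward_model, measurement, lr=0.1):
    """One hypothetical correction step: nudge the refractive indices n
    downhill on the squared mismatch between simulated and measured data.

    The gradient is estimated by finite differences, standing in for the
    analytic back-propagation a real implementation would use.
    """
    eps = 1e-6
    base_loss = np.sum((forward_model(n) - measurement) ** 2)
    grad = np.zeros_like(n)
    for i in range(n.size):
        n_pert = n.copy()
        n_pert.flat[i] += eps
        pert_loss = np.sum((forward_model(n_pert) - measurement) ** 2)
        grad.flat[i] = (pert_loss - base_loss) / eps
    return n - lr * grad
```

Iterating this step drives the simulated data toward the measurement; in the actual algorithm the forward model is the layered BPM and the gradient comes from back-propagation through those layers.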

Related units (27)

A phase-field model describing the solidification of a binary alloy is investigated. The location of the solid and liquid phases in the computational domain is described by introducing an order parameter, the phase field, which varies smoothly from one in the solid to zero in the liquid across a slightly diffused interface. The solidification process of binary alloys is controlled by the local concentration of the alloy and the temperature. The concentration is altered by the existing flows in the melt. With the temperature held constant, the model couples the phase-field equation, the concentration equation, and the compressible Navier-Stokes equations. The main difficulty in solving phase-field models numerically is the very rapid change of the phase field and the concentration field across the diffused interface, whose thickness must be taken very small compared to the dimension of the computational domain in order to correctly capture the physics of the phase transformation. A high spatial resolution is therefore needed to describe the smooth transition. In this work, we present a physical model governing the solidification process. To reduce the number of grid points required for reliable simulations, we introduce an adaptive algorithm that builds successive meshes with large aspect ratio such that the relative estimated error of the concentration and/or velocity in the H1-norm is close to a preset tolerance TOL. For this purpose, we introduce error indicators which measure the error of the concentration and the velocity in the directions of maximum and minimum stretching of each element. Finally, we apply our method to 2D and 3D simulations of dendritic growth, demonstrating its efficiency.
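The tolerance-driven refinement loop described above can be caricatured in one dimension. This is a deliberately simplified sketch, not the paper's anisotropic H1-norm estimator: the indicator here is just the gradient jump across an element's neighbours, and elements whose indicator exceeds the tolerance are bisected.

```python
import numpy as np

def adapt_mesh_1d(nodes, u, tol):
    """Bisect elements of a 1D mesh whose crude error indicator exceeds tol.

    nodes: sorted node coordinates; u: discrete solution values at the nodes.
    The indicator per element is |gradient jump across neighbours| * element size,
    a toy stand-in for a real a posteriori error estimator.
    """
    grads = np.diff(u) / np.diff(nodes)  # piecewise-constant gradient
    new_nodes = [nodes[0]]
    for i in range(len(nodes) - 1):
        left = grads[i - 1] if i > 0 else grads[i]
        right = grads[i + 1] if i < len(grads) - 1 else grads[i]
        indicator = abs(right - left) * (nodes[i + 1] - nodes[i])
        if indicator > tol:
            # Refine: insert the element midpoint.
            new_nodes.append(0.5 * (nodes[i] + nodes[i + 1]))
        new_nodes.append(nodes[i + 1])
    return np.array(new_nodes)
```

Applied to a solution with a kink (e.g. u = |x - 0.5|), only the elements near the kink are refined, which is the point of adaptivity: resolution is spent where the estimated error is large, as across a diffused interface.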

Related lectures (446)

Hypothesis: The number of people suffering from shoulder osteoarthritis is increasing as the population ages. An end-stage treatment is total shoulder arthroplasty (TSA), but it still suffers from a high failure rate compared to arthroplasty of other joints. To better understand the causes and mechanisms of this high failure rate, researchers build patient-specific models. Building these models requires a workflow composed of several steps. The first step is the segmentation process, which extracts the geometry of the patient's scapula. Different segmentation methods are used, and two of them were investigated here. It has been hypothesised that uncertainty in the segmentation can translate into a larger uncertainty in the modelling outputs, and the quantification of such errors has never been done. The goal was to estimate the errors between two segmentation methods: the "manual" and the "semi-automated" one. Methods: The two segmentation methods were applied to one cadaveric scapula. The manual segmentation was carried out using threshold values and manual adjustments. For the semi-automated segmentation, the cortical bone was likewise extracted using threshold values and manual adjustments, but the trabecular bone was obtained by shrinking the manually segmented cortical contour by 3 mm. Each resulting bone geometry was then implanted, and a FE model was built for each of them. Exactly the same steps were applied to each bone geometry after the segmentation process, to influence the error estimation as little as possible. The error was estimated by comparing the modelling outputs of the two models. Results: The semi-automated segmented bone geometry went through all the steps, and the FE outputs were as expected. The manual segmentation suffered from invalid geometries, and no proper mesh could be generated because of the extremely thin cortical thickness in the glenoid cavity. No error estimation was therefore performed. The difference in segmented volume between the two methods was found to be substantial. Conclusion: The semi-automated segmentation process is an easy and fast method to implement. The manual segmentation is extremely time-consuming and building the FE model is more challenging, but it is more accurate. The large difference in segmented volume suggests that the segmentation process influences the modelling outputs. The two methods should be compared on more scapulae before general conclusions can be drawn.

2018