Computation

Summary

A computation is any type of arithmetic or non-arithmetic calculation that is well-defined. Common examples of computations are mathematical equations and computer algorithms.
Mechanical or electronic devices (or, historically, people) that perform computations are known as computers. The study of computation is the field of computability theory, itself a sub-field of computer science.
Introduction
The notion that mathematical statements should be ‘well-defined’ had been debated by mathematicians since at least the 1600s, but agreement on a suitable definition proved elusive. A candidate definition was proposed independently by several mathematicians in the 1930s. The best-known variant was formalised by the mathematician Alan Turing, who defined a well-defined statement or calculation as any statement that could be expressed in terms of the initialisation parameters of a Turing machine. Other (mathematically equivalent) definitions include Alonzo Church's lambda-definability and the Herbrand–Gödel notion of general recursiveness.
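
Turing's criterion can be made concrete with a small simulator: a calculation is well-defined if it can be phrased as a transition table driving a machine of this kind. The sketch below is illustrative only; the unary-increment machine and its state names are assumptions, not drawn from the text above.

```python
# Minimal Turing machine simulator: a sparse tape, a head, and a
# transition table mapping (state, symbol) -> (new_state, new_symbol, move).
def run_turing_machine(transitions, tape, state="start", halt="halt", max_steps=10_000):
    cells = dict(enumerate(tape))  # sparse tape; blank cells read as "_"
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(head, "_")
        state, cells[head], move = transitions[(state, symbol)]
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Hypothetical example machine: append one "1" to a unary number (n + 1).
increment = {
    ("start", "1"): ("start", "1", "R"),  # scan right over the existing 1s
    ("start", "_"): ("halt", "1", "R"),   # write a 1 on the first blank, halt
}
```

Here the "initialisation parameters" are exactly the transition table, the initial tape contents, and the start state; a computation is well-defined precisely when it can be packaged this way.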

Official source

This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.

Related concepts (35)

Computer science

Computer science is the study of computation, information, and automation. Computer science spans theoretical disciplines (such as algorithms, theory of computation, and information theory) to applied disciplines (including the design and implementation of hardware and software).

Computer

A computer is a machine that can be programmed to carry out sequences of arithmetic or logical operations (computation) automatically. Modern digital electronic computers can perform generic sets of operations known as programs.

Algorithm

In mathematics and computer science, an algorithm (ˈælɡərɪðəm) is a finite sequence of rigorous instructions, typically used to solve a class of specific problems or to perform a computation. Algorithms are used as specifications for performing calculations and data processing.
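
The definition above — a finite sequence of rigorous instructions — is easiest to see in a classic example. Euclid's algorithm for the greatest common divisor is a standard textbook illustration, not taken from this page:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b).

    Each step strictly shrinks b toward zero, so the sequence of
    instructions is guaranteed to terminate -- the defining property
    of an algorithm.
    """
    while b != 0:
        a, b = b, a % b
    return a
```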

Related publications

Francesco Cremonesi, Gerhard Wellein

Big science initiatives are trying to reconstruct and model the brain by attempting to simulate brain tissue at larger scales and with increasingly more biological detail than previously thought possible. The exponential growth of parallel computer performance has been supporting these developments, and at the same time maintainers of neuroscientific simulation code have strived to optimally and efficiently exploit new hardware features. Current state-of-the-art software for the simulation of biological networks has so far been developed using performance engineering practices, but a thorough analysis and modeling of the computational and performance characteristics, especially in the case of morphologically detailed neuron simulations, is lacking. Other computational sciences have successfully used analytic performance engineering, which is based on "white-box," that is, first-principles performance models, to gain insight on the computational properties of simulation kernels, aid developers in performance optimizations and eventually drive codesign efforts, but to our knowledge a model-based performance analysis of neuron simulations has not yet been conducted. We present a detailed study of the shared-memory performance of morphologically detailed neuron simulations based on the Execution-Cache-Memory performance model. We demonstrate that this model can deliver accurate predictions of the runtime of almost all the kernels that constitute the neuron models under investigation. The gained insight is used to identify the main governing mechanisms underlying performance bottlenecks in the simulation. The implications of this analysis on the optimization of neural simulation software and eventually codesign of future hardware architectures are discussed. In this sense, our work represents a valuable conceptual and quantitative contribution to understanding the performance properties of biological networks simulations.
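
The "white-box", first-principles style of performance modeling described in this abstract can be illustrated with a toy model in the spirit of ECM/roofline analysis: predict a kernel's runtime from the slower of its compute and memory-transfer demands. All hardware numbers below are hypothetical placeholders, not figures from the paper.

```python
# Toy analytic performance model: predicted runtime is set by whichever
# resource (floating-point units or memory bandwidth) saturates first.
def predict_runtime_s(n_iters, flops_per_iter, bytes_per_iter,
                      peak_flops=100e9, peak_bw_bytes=20e9):
    t_compute = n_iters * flops_per_iter / peak_flops   # compute-bound limit
    t_memory = n_iters * bytes_per_iter / peak_bw_bytes  # bandwidth-bound limit
    return max(t_compute, t_memory)  # the slower resource dominates

# A daxpy-like kernel (2 flops, 24 bytes per iteration) is memory-bound
# on these assumed numbers: bandwidth, not flops, sets the runtime.
t = predict_runtime_s(1_000_000, 2, 24)
```

Comparing such predictions against measured runtimes, kernel by kernel, is what lets a model of this kind expose where the true bottlenecks lie.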

2020

This thesis presents a systematic study of the merits and limitations of using pin-by-pin resolution and transport-theory-based approaches for nuclear core design calculations. Starting from the lattice codes and an optimal cross-section generation scheme, it compares different methods, transport approximations, and spatial discretizations used in pin-by-pin homogenized codes.
It is in the interest of nuclear power plant operators to employ more heterogeneous core loadings in order to improve the fuel utilization and decrease the amount of spent fuel. This necessarily increases the requirements on the accuracy of the computation tools used for the core design and safety analysis. One possibility is employing 3D core solvers with higher spatial resolution, e.g. pin-cell wise.
The comparison of several lattice codes indicates that even the proper generation of diffusion coefficients and higher-order scattering moments for pin-cell geometry is not straightforward. Of the three available lattice codes, two generated unphysical diffusion coefficients when using the inscatter approximation, while the third was not able to provide the higher scattering moments.
Several few-assembly test cases with either high neutron leakage effects or MOX/uranium interfaces showed that the quality of the results of diffusion and SP3 solvers depends critically on the choice of the diffusion coefficient: the outscatter approximation can cause major deviations, while the inscatter approximation seems overall more adequate. Regarding the transport solvers, results obtained with the SN solver DORT showed very good performance for MOX/uranium interfaces, but major deviations occurred for problems with large power gradients. On the other hand, the MOC solver nTRACER showed a small average error in all cases, but a 2–3 % error around the interfaces of different assemblies.
A study on spatial discretization indicated that the finite difference method applied on pin-cells does not properly capture the large flux changes between MOX and uranium fuel, while the nodal expansion method is more accurate but too slow. It was therefore suggested to use the finite difference method with a finer mesh in the outer assembly pin-cells, which increases the required computation time by only 50 % and decreases the pin power errors to below 1 % with respect to lattice code results.
Due to some problems which were observed with the available diffusion/SP3 solvers, a new SP3 solver was implemented in the DORT-TD platform. Several core tests showed that the SP3 pin-by-pin solver can significantly outperform the state-of-the-art nodal solver SIMULATE-5, in particular for reactor cores with inserted control rods.
Finally, the pin-by-pin solvers were coupled to a depletion solver. For that, an accurate and fast interpolation routine had to be implemented. The obtained results of full-cycle depletion with a complex core loading showed very good performance in comparison to heterogeneous transport-based fine-group calculations.
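
The spatial-discretization trade-off the thesis examines (finite differences on a mesh vs. more elaborate nodal methods) can be illustrated on the simplest possible case: a one-group, one-dimensional diffusion equation -D u'' + Σₐ u = S with zero boundary flux, discretized by central differences. The cross-section values below are made-up illustrative numbers, not reactor data.

```python
# 1D one-group diffusion: -D u'' + sig_a * u = src on [0, L], u = 0 at both
# ends, central finite differences, solved with the Thomas algorithm
# (Gaussian elimination specialized to tridiagonal systems).
def solve_diffusion_1d(n, L=1.0, D=1.0, sig_a=0.5, src=1.0):
    h = L / (n + 1)                      # n interior nodes
    a = [-D / h**2] * n                  # sub-diagonal
    b = [2 * D / h**2 + sig_a] * n       # main diagonal
    c = [-D / h**2] * n                  # super-diagonal
    d = [src] * n                        # uniform source
    for i in range(1, n):                # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    u = [0.0] * n                        # back substitution
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u
```

Refining the mesh (larger n) drives the solution toward the analytic profile at O(h²) cost — the same accuracy-versus-runtime tension that motivates the thesis's mixed-mesh suggestion.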

Related courses (56)

ME-371: Discretization methods in fluids

This course presents an introduction to the approximation methods used for numerical simulation in fluid mechanics.
The fundamental concepts are introduced in the framework of the finite difference method and then extended to the finite element and spectral methods.
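
The finite-difference starting point of such a course fits in a few lines: approximate f'(x) by a central difference and watch the error shrink as the step size halves. The test function sin(x) is an arbitrary choice for illustration.

```python
import math

def central_diff(f, x, h):
    """Second-order central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# O(h^2) accuracy: halving h should cut the error by roughly 4x.
err1 = abs(central_diff(math.sin, 1.0, 1e-2) - math.cos(1.0))
err2 = abs(central_diff(math.sin, 1.0, 5e-3) - math.cos(1.0))
```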

CS-308: Quantum computation

The course introduces the paradigm of quantum computation in an axiomatic way. We introduce the notions of quantum bits, gates, and circuits, and we treat the most important quantum algorithms. We also touch upon error-correcting codes. This course is independent of COM-309.
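
The notions of quantum bits and gates mentioned in the description can be sketched with plain state vectors: a qubit is a pair of complex amplitudes, and a gate is a 2×2 unitary matrix applied to it. This is a standard textbook illustration, not material from the course itself.

```python
import math

# A qubit as a pair of complex amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1; a gate is a 2x2 matrix acting on it.
def apply_gate(gate, qubit):
    (a, b), (c, d) = gate
    x, y = qubit
    return (a * x + b * y, c * x + d * y)

# Hadamard gate: sends |0> to the equal superposition (|0> + |1>) / sqrt(2),
# and is its own inverse.
H = ((1 / math.sqrt(2), 1 / math.sqrt(2)),
     (1 / math.sqrt(2), -1 / math.sqrt(2)))

zero = (1 + 0j, 0 + 0j)       # the basis state |0>
plus = apply_gate(H, zero)    # equal superposition of |0> and |1>
back = apply_gate(H, plus)    # applying H twice returns |0>
```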

COM-490: Large-scale data science for real-world data

This hands-on course teaches the tools & methods used by data scientists, from researching solutions to scaling up
prototypes to Spark clusters. It exposes the students to the entire data science pipeline, from data acquisition to
extracting valuable insights applied to real-world problems.

Ömer Sercan Arik, Pascal Frossard, Elif Vural

Efficient solutions for the classification of multi-view images can be built on graph-based algorithms when little information is known about the scene or cameras. Such methods typically require a pairwise similarity measure between images, where a common choice is the Euclidean distance. However, the accuracy of the Euclidean distance as a similarity measure is restricted to cases where images are captured from nearby viewpoints. In settings with large transformations and viewpoint changes, alignment of images is necessary prior to distance computation. We propose a method for the registration of uncalibrated images that capture the same 3D scene or object. We model the depth map of the scene as an algebraic surface, which yields a warp model in the form of a rational function between image pairs. The warp model is computed by minimizing the registration error, where the registered image is a weighted combination of two images generated with two different warp functions estimated from feature matches and image intensity functions in order to provide robust registration. We demonstrate the flexibility of our alignment method by experimentation on several wide-baseline image pairs with arbitrary scene geometries and texture levels. Moreover, the results on multi-view image classification suggest that the proposed alignment method can be effectively used in graph-based classification algorithms for the computation of pairwise distances where it achieves significant improvements over distance computation without prior alignment.
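
The pairwise Euclidean similarity that the paper takes as its baseline is easy to state in code; the sketch below computes the symmetric distance matrix a graph-based classifier would consume. The tiny feature vectors are hypothetical stand-ins for flattened images, not data from the paper.

```python
import math

def pairwise_distances(vectors):
    """Euclidean distance matrix between flattened image feature vectors."""
    n = len(vectors)
    dist = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d = math.sqrt(sum((a - b) ** 2 for a, b in zip(vectors[i], vectors[j])))
            dist[i][j] = dist[j][i] = d  # symmetric, zero diagonal
    return dist
```

As the abstract notes, this measure degrades under large viewpoint changes, which is why the paper registers (aligns) image pairs before computing it.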

2011

Related lectures (71)