This list compares various amounts of computing power, in instructions per second, organized by order of magnitude in FLOPS.
5×10⁻¹: Computing power of the average human performing multiplication using pen and paper
1 OP/S: Power of an average human performing calculations using pen and paper
1 OP/S: Computing power of the Zuse Z1
5 OP/S: World record speed for human addition
5×10¹: Upper end of serialized human perception computation (light bulbs do not flicker to the human observer)
2.2×10²: Upper end of serialized human throughput. This is roughly expressed by the lower limit of accurate event placement on small scales of time (the swing of a conductor's arm, the reaction time to lights on a drag strip, etc.)
2×10²: IBM 602 computer, 1946
92×10³: Intel 4004, the first commercially available full-function CPU on a chip, released in 1971
500×10³: Colossus vacuum tube computer, 1943
1×10⁶: Computing power of the Motorola 68000 microprocessor, introduced in 1979. This is also the minimum computing power of a Type 0 Kardashev civilization.
1.2×10⁶: IBM 7030 "Stretch" transistorized supercomputer, 1961
1×10⁹: ILLIAC IV 1972 supercomputer; ran the first computational fluid dynamics problems
1.354×10⁹: Intel Pentium III commercial microprocessor, 1999
147.6×10⁹: Intel Core i7-980X Extreme Edition commercial microprocessor, 2010
1.34×10¹²: Intel ASCI Red supercomputer, 1997
1.344×10¹²: GeForce GTX 480 from Nvidia at its peak performance, 2010
4.64×10¹²: Radeon HD 5970 from AMD (under ATI branding) at its peak performance, 2009
5.152×10¹²: S2050/S2070 1U GPU Computing System from Nvidia
11.3×10¹²: GeForce GTX 1080 Ti, 2017
13.7×10¹²: Radeon RX Vega 64, 2017
15.0×10¹²: Nvidia Titan V, 2017
80×10¹²: IBM Watson
170×10¹²: Nvidia DGX-1. The initial Pascal-based DGX-1 delivered 170 teraFLOPS of half-precision processing.
478.2×10¹²: IBM BlueGene/L supercomputer, 2007
960×10¹²: Nvidia DGX-1. The Volta-based upgrade increased the computing power of the Nvidia DGX-1 to 960 teraFLOPS.
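The orders of magnitude in the list above map directly onto SI prefixes. A minimal Python sketch (the helper name and prefix table are illustrative, not from the source) that converts a raw FLOPS value into prefixed form:

```python
import math

# SI prefixes for the powers of ten commonly used with FLOPS
PREFIXES = {3: "kilo", 6: "mega", 9: "giga", 12: "tera", 15: "peta", 18: "exa"}

def si_flops(flops: float) -> str:
    """Format a FLOPS value using the largest applicable SI prefix."""
    exp = int(math.floor(math.log10(flops)))
    if exp < 3:
        return f"{flops:g} FLOPS"
    # largest prefix exponent not exceeding the value's order of magnitude
    base = max(e for e in PREFIXES if e <= exp)
    return f"{flops / 10**base:g} {PREFIXES[base]}FLOPS"

# A few entries from the list above
print(si_flops(1.34e12))    # ASCI Red, 1997
print(si_flops(478.2e12))   # IBM BlueGene/L, 2007
```

For example, 478.2×10¹² FLOPS formats as 478.2 teraFLOPS, since 10¹⁴ rounds down to the tera (10¹²) prefix.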
Fugaku 富岳 is a petascale supercomputer at the Riken Center for Computational Science in Kobe, Japan. It started development in 2014 as the successor to the K computer and made its debut in 2020. It is named after an alternative name for Mount Fuji. It became the fastest supercomputer in the world in the June 2020 TOP500 list as well as becoming the first ARM architecture-based computer to achieve this. At this time it also achieved 1.42 exaFLOPS using the mixed fp16/fp64 precision HPL-AI benchmark.
Petascale computing refers to computing systems capable of calculating at least 10¹⁵ floating point operations per second (1 petaFLOPS). Petascale computing allowed faster processing of traditional supercomputer applications. The first system to reach this milestone was the IBM Roadrunner in 2008. Petascale supercomputers were succeeded by exascale computers. Floating point operations per second (FLOPS) are one measure of computer performance.
Exascale computing refers to computing systems capable of calculating at least 10¹⁸ IEEE 754 double-precision (64-bit) operations (multiplications and/or additions) per second (1 exaFLOPS); it is a measure of supercomputer performance. Exascale computing is a significant achievement in computer engineering: primarily, it allows improved scientific applications and better prediction accuracy in domains such as weather forecasting, climate modeling and personalised medicine.
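The factor-of-1000 jump from petascale to exascale can be made concrete with a back-of-the-envelope calculation (the workload size here is illustrative, not from the source):

```python
PETA = 10**15  # 1 petaFLOPS
EXA = 10**18   # 1 exaFLOPS

# Illustrative workload: 10**21 double-precision operations
ops = 10**21

# A 1-petaFLOPS machine needs about 10**6 seconds (~11.6 days);
# a 1-exaFLOPS machine finishes the same work in 1000 seconds.
print(f"petascale: {ops / PETA:.0f} s (~{ops / PETA / 86400:.1f} days)")
print(f"exascale:  {ops / EXA:.0f} s (~{ops / EXA / 3600:.2f} hours)")
```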
To better understand the interaction of plasma microinstabilities and associated turbulence with specific modes, an antenna is implemented in the global gyrokinetic Particle-In-Cell (PIC) code ORB5.
It consists of applying an external ...
Quantum ESPRESSO is an open-source distribution of computer codes for quantum-mechanical materials modeling, based on density-functional theory, pseudopotentials, and plane waves, and renowned for its performance on a wide range of hardware architectures, ...
Numerical simulations are of tremendous help in understanding the growth of non-linear cosmological structures and how they lead to the formation of galaxies. In recent years, with the goal of improving their predictive power, new hydrodynamical techniques ...