Summary
Exascale computing refers to computing systems capable of calculating at least 10^18 IEEE 754 double-precision (64-bit) operations (multiplications and/or additions) per second (one exaFLOPS); it is a measure of supercomputer performance. Exascale computing is a significant achievement in computer engineering: primarily, it allows improved scientific applications and better prediction accuracy in domains such as weather forecasting, climate modelling and personalised medicine. Exascale also reaches the estimated processing power of the human brain at the neural level, a target of the Human Brain Project.

There has been a race to be the first country to build an exascale computer, typically ranked in the TOP500 list. In 2022, the world's first public exascale computer, Frontier, was announced; at its debut it was ranked the world's fastest supercomputer.

Floating point operations per second (FLOPS) are one measure of computer performance. FLOPS can be recorded at different levels of precision; however, the standard measure (used by the TOP500 supercomputer list) counts 64-bit (double-precision floating-point format) operations per second using the High Performance LINPACK (HPLinpack) benchmark. While a distributed computing system had broken the 1 exaFLOPS barrier before Frontier, the metric typically refers to single computing systems. Supercomputers had also previously broken the 1 exaFLOPS barrier using alternative precision measures; again, these do not meet the criteria for exascale computing under the standard metric. It is recognised that HPLinpack may not be a good general measure of supercomputer utility in real-world applications, but it is the common standard for performance measurement.

It has also been recognised that enabling applications to fully exploit the capabilities of exascale computing systems is not straightforward. Developing data-intensive applications over exascale platforms requires the availability of new and effective programming paradigms and runtime systems.
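The TOP500 figure is obtained by timing an HPLinpack solve of a dense n-by-n linear system and dividing the operation count by the wall-clock time. A minimal sketch of that arithmetic, assuming the conventional LINPACK operation count of 2/3·n^3 + 2·n^2 (the function name, problem size and runtime below are illustrative, not values from any real benchmark run):

```python
def hpl_flops(n: int, seconds: float) -> float:
    """Estimate sustained FLOPS for solving a dense n x n linear system,
    using the conventional LINPACK operation count 2/3*n^3 + 2*n^2."""
    ops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    return ops / seconds

EXAFLOPS = 1e18  # 10^18 operations per second

# Hypothetical run: a problem size of 24 million solved in 6000 seconds.
rate = hpl_flops(24_000_000, 6000.0)
print(f"{rate / EXAFLOPS:.2f} exaFLOPS")  # → 1.54 exaFLOPS
```

The cubic term dominates, which is why real HPL runs push the problem size as high as memory allows: larger n keeps the machine in compute-bound dense linear algebra, where sustained performance approaches the hardware peak.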
Related courses (1)
COM-490: Large-scale data science for real-world data
This hands-on course teaches the tools & methods used by data scientists, from researching solutions to scaling up prototypes to Spark clusters. It exposes the students to the entire data science pipe
Related lectures (17)
Superconducting Digital Electronics: Advantages and Challenges
Explores superconducting digital electronics, focusing on speed, power consumption, and historical developments in Cryotron and Josephson logic.
Big Data Best Practices and Guidelines
Covers best practices and guidelines for big data, including data lakes, architecture, challenges, and technologies like Hadoop and Hive.
Big Data: Best Practices and Guidelines
Covers best practices and guidelines for big data, including data lakes, typical architecture, challenges, and technologies used to address them.
Related publications (23)

Multiscale biomolecular simulations in the exascale era

Ursula Röthlisberger, Simone Meloni

The complexity of biological systems and processes, spanning molecular to macroscopic scales, necessitates the use of multiscale simulations to get a comprehensive understanding. Molecular dynamics (MD) simulations are crucial for capturing processes beyond the ...
Current Biology Ltd, 2024

Towards Efficient and Accurate Numerical Simulations of Galaxies using Task-based Parallelism and Application to Dwarf Galaxies

Loïc Hausammann

Numerical simulations are of a tremendous help to understand the growth of non-linear cosmological structures and how they lead to the formation of galaxies. In recent years, with the goal of improving their prediction power, new hydrodynamical techniques ...
EPFL, 2021
Related concepts (14)
Lustre (file system)
Lustre is a type of parallel distributed file system, generally used for large-scale cluster computing. The name Lustre is a portmanteau word derived from Linux and cluster. Lustre file system software is available under the GNU General Public License (version 2 only) and provides high-performance file systems for computer clusters ranging in size from small workgroup clusters to large-scale, multi-site systems. Since June 2005, Lustre has consistently been used by at least half of the top ten, and more than 60 of the top 100, fastest supercomputers in the world.
Petascale computing
Petascale computing refers to computing systems capable of calculating at least 10^15 floating point operations per second (1 petaFLOPS). Petascale computing allowed faster processing of traditional supercomputer applications. The first system to reach this milestone was the IBM Roadrunner in 2008. Petascale supercomputers were succeeded by exascale computers. Floating point operations per second (FLOPS) are one measure of computer performance.
Tianhe-1
Tianhe-I, Tianhe-1, or TH-1 (pinyin: tian1he2 yi1hao4; "Sky River Number One") is a supercomputer capable of an Rmax (maximal achieved LINPACK performance) of 2.5 petaFLOPS. Located at the National Supercomputing Center of Tianjin, China, it was the fastest computer in the world from October 2010 to June 2011 and was one of the few petascale supercomputers in the world. In October 2010, an upgraded version of the machine (Tianhe-1A) overtook ORNL's Jaguar to become the world's fastest supercomputer.