In computer architecture, speedup is a number that measures the relative performance of two systems processing the same problem. More technically, it is the improvement in speed of execution of a task executed on two similar architectures with different resources. The notion of speedup was established by Amdahl's law, which was particularly focused on parallel processing. However, speedup can be used more generally to show the effect on performance after any resource enhancement.
Speedup can be defined for two different types of quantities: latency and throughput.
Latency of an architecture is the reciprocal of the execution speed of a task:

$$L = \frac{1}{v} = \frac{T}{W}$$

where
$v$ is the execution speed of the task;
$T$ is the execution time of the task;
$W$ is the execution workload of the task.
Throughput of an architecture is the execution rate of a task:

$$Q = \rho v A = \frac{\rho A W}{T} = \frac{\rho A}{L}$$

where
$\rho$ is the execution density (e.g., the number of stages in an instruction pipeline for a pipelined architecture);
$A$ is the execution capacity (e.g., the number of processors for a parallel architecture).
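As a minimal sketch of these definitions (in Python, with hypothetical function names; the symbols follow the definitions above):

```python
def latency(T: float, W: float) -> float:
    """Latency L = 1/v = T/W, in seconds per unit of execution workload."""
    return T / W

def throughput(rho: float, A: float, W: float, T: float) -> float:
    """Throughput Q = rho * v * A = rho * A * W / T, in workload units per second."""
    return rho * A * W / T
```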
Latency is often measured in seconds per unit of execution workload, and throughput in units of execution workload per second. Instructions per cycle (IPC) is another unit of throughput; its reciprocal, cycles per instruction (CPI), is a corresponding unit of latency.
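For instance, with illustrative numbers, a processor that retires $8 \times 10^9$ instructions in $4 \times 10^9$ clock cycles achieves an IPC of 2 and, equivalently, a CPI of 0.5.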
Speedup is dimensionless and defined differently for each type of quantity so that it is a consistent metric.
Speedup in latency is defined by the following formula:

$$S_\text{latency} = \frac{L_1}{L_2} = \frac{T_1 W_2}{T_2 W_1}$$

where
$S_\text{latency}$ is the speedup in latency of architecture 2 with respect to architecture 1;
$L_1$ is the latency of architecture 1;
$L_2$ is the latency of architecture 2.
Speedup in latency can be predicted from Amdahl's law or Gustafson's law.
Speedup in throughput is defined by the formula:

$$S_\text{throughput} = \frac{Q_2}{Q_1} = \frac{\rho_2 A_2 T_1 W_2}{\rho_1 A_1 T_2 W_1} = \frac{\rho_2 A_2}{\rho_1 A_1} S_\text{latency}$$

where
$S_\text{throughput}$ is the speedup in throughput of architecture 2 with respect to architecture 1;
$Q_1$ is the throughput of architecture 1;
$Q_2$ is the throughput of architecture 2.
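Continuing the sketch above, both speedups follow directly from these definitions (function names and numbers are illustrative):

```python
def speedup_latency(L1: float, L2: float) -> float:
    """S_latency = L1 / L2: latency speedup of architecture 2 over architecture 1."""
    return L1 / L2

def speedup_throughput(Q1: float, Q2: float) -> float:
    """S_throughput = Q2 / Q1: throughput speedup of architecture 2 over architecture 1."""
    return Q2 / Q1

# Illustrative numbers: the same workload W runs in 8 s on architecture 1
# and in 2 s on architecture 2, so the latency speedup is 4.
W = 1.0e9                     # execution workload (e.g., instructions)
L1, L2 = 8.0 / W, 2.0 / W     # latencies in seconds per workload unit
print(speedup_latency(L1, L2))  # -> 4.0
```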
As an example, consider testing the effectiveness of a branch predictor on the execution of a program.
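A minimal worked illustration, with assumed timings: suppose the program runs in 2.25 s with the processor's standard branch predictor and in 1.50 s with an enhanced predictor, the execution workload being the same in both runs. Then

$$S_\text{latency} = \frac{L_1}{L_2} = \frac{T_1}{T_2} = \frac{2.25}{1.50} = 1.5,$$

so the enhanced branch predictor makes the program 1.5 times faster.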
In computer architecture, Amdahl's law (or Amdahl's argument) is a formula which gives the theoretical speedup in latency of the execution of a task at fixed workload that can be expected of a system whose resources are improved. It states that "the overall performance improvement gained by optimizing a single part of a system is limited by the fraction of time that the improved part is actually used". It is named after computer scientist Gene Amdahl, and was presented at the American Federation of Information Processing Societies (AFIPS) Spring Joint Computer Conference in 1967.
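For reference, Amdahl's law is usually stated as

$$S_\text{latency}(s) = \frac{1}{(1 - p) + \frac{p}{s}},$$

where $p$ is the fraction of the original execution time that benefits from the improvement and $s$ is the speedup of that part; as $s \to \infty$, the overall speedup is bounded by $1/(1 - p)$.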
Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has long been employed in high-performance computing, but has gained broader interest due to the physical constraints preventing frequency scaling.