Summary
In computer architecture, speedup is a number that measures the relative performance of two systems processing the same problem. More technically, it is the improvement in the speed of execution of a task on two similar architectures with different resources. The notion of speedup was established by Amdahl's law, which focused particularly on parallel processing. However, speedup can be used more generally to show the effect of any resource enhancement on performance.

Speedup can be defined for two different types of quantities: latency and throughput.

Latency of an architecture is the reciprocal of the execution speed of a task:

$$L = \frac{1}{v} = \frac{T}{W}$$

where $v$ is the execution speed of the task, $T$ is the execution time of the task, and $W$ is the execution workload of the task.

Throughput of an architecture is the execution rate of a task:

$$Q = \rho v A = \frac{\rho A W}{T} = \frac{\rho A}{L}$$

where $\rho$ is the execution density (e.g., the number of stages in an instruction pipeline for a pipelined architecture) and $A$ is the execution capacity (e.g., the number of processors for a parallel architecture).

Latency is often measured in seconds per unit of execution workload, and throughput in units of execution workload per second. Another unit of throughput is instructions per cycle (IPC); its reciprocal, cycles per instruction (CPI), is another unit of latency.

Speedup is dimensionless and is defined differently for each type of quantity so that it is a consistent metric. Speedup in latency is defined by the formula

$$S_\text{latency} = \frac{L_1}{L_2} = \frac{T_1 W_2}{T_2 W_1}$$

where $S_\text{latency}$ is the speedup in latency of architecture 2 with respect to architecture 1, $L_1$ is the latency of architecture 1, and $L_2$ is the latency of architecture 2. Speedup in latency can be predicted from Amdahl's law or Gustafson's law.

Speedup in throughput is defined by the formula

$$S_\text{throughput} = \frac{Q_2}{Q_1} = \frac{\rho_2 A_2 T_1 W_2}{\rho_1 A_1 T_2 W_1} = \frac{\rho_2 A_2}{\rho_1 A_1} S_\text{latency}$$

where $S_\text{throughput}$ is the speedup in throughput of architecture 2 with respect to architecture 1, $Q_1$ is the throughput of architecture 1, and $Q_2$ is the throughput of architecture 2.

As an example, suppose we are testing the effectiveness of a branch predictor on the execution of a program; a worked calculation is sketched below.
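To make these definitions concrete, here is a minimal Python sketch of the branch-predictor scenario. The execution times (2.25 s with the baseline predictor, 1.50 s with the enhanced one) and the unit workload are illustrative assumptions, not values given in the text.

```python
# Minimal sketch of the speedup-in-latency calculation for the
# branch-predictor scenario above. All numbers are assumed values
# for illustration only.

def latency(execution_time: float, workload: float) -> float:
    """Latency L = T / W: seconds per unit of execution workload."""
    return execution_time / workload

def speedup_in_latency(l1: float, l2: float) -> float:
    """S_latency = L1 / L2: speedup of architecture 2 over architecture 1."""
    return l1 / l2

# The same program runs on both configurations, so the workload W is equal.
W = 1.0    # one program execution, treated as the unit of workload
T1 = 2.25  # assumed execution time with the baseline branch predictor (s)
T2 = 1.50  # assumed execution time with the enhanced branch predictor (s)

L1 = latency(T1, W)
L2 = latency(T2, W)

print(f"Speedup in latency: {speedup_in_latency(L1, L2):.2f}")  # 1.50
```

Because the workload is identical in both runs, the speedup in latency reduces to the ratio of execution times, $T_1 / T_2 = 1.5$.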
Related concepts (4)
Amdahl's law
In computer architecture, Amdahl's law (or Amdahl's argument) is a formula which gives the theoretical speedup in latency of the execution of a task at fixed workload that can be expected of a system whose resources are improved. It states that "the overall performance improvement gained by optimizing a single part of a system is limited by the fraction of time that the improved part is actually used". It is named after computer scientist Gene Amdahl, and was presented at the American Federation of Information Processing Societies (AFIPS) Spring Joint Computer Conference in 1967.
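As a sketch of the formula behind this statement: in its usual form, if a fraction p of a task's execution time benefits from a speedup s, Amdahl's law bounds the overall speedup in latency by 1 / ((1 − p) + p / s). The values below (p = 0.95, s = 8) are assumptions chosen for illustration.

```python
def amdahl_speedup(p: float, s: float) -> float:
    """Overall speedup in latency when a fraction p of the execution
    time is improved by a factor s (standard form of Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / s)

# Assumed example: 95% of a task parallelizes perfectly across 8 processors.
print(amdahl_speedup(0.95, 8))      # ~5.93
# The serial 5% caps the achievable speedup at 1 / 0.05 = 20,
# no matter how many processors are added.
print(amdahl_speedup(0.95, 10**6))  # ~20.0
```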
Parallel computing
Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has long been employed in high-performance computing, but has gained broader interest due to the physical constraints preventing frequency scaling.