In computer architecture, Gustafson's law (or Gustafson–Barsis's law) gives the speedup in the execution time of a task that theoretically gains from parallel computing, using a hypothetical run of the task on a single-core machine as the baseline. To put it another way, it is the theoretical "slowdown" of an already parallelized task if run on a serial machine. It is named after computer scientist John L. Gustafson and his colleague Edwin H. Barsis, and was presented in the article Reevaluating Amdahl's Law in 1988.

Gustafson estimated the speedup $S$ of a program gained by using parallel computing as follows:

$$S = s + p \cdot N = s + (1 - s) \cdot N = N + (1 - N) \cdot s$$

where $S$ is the theoretical speedup of the program with parallelism (scaled speedup); $N$ is the number of processors; and $s$ and $p$ are the fractions of time spent executing the serial parts and the parallel parts of the program, respectively, on the parallel system, where $s + p = 1$. The formula follows from comparing execution times: if the parallel run takes normalized time $s + p = 1$, the same workload would take time $s + pN$ on a serial machine, so the speedup is $s + pN$. Alternatively, $S$ can be expressed using $p$:

$$S = (1 - p) + p \cdot N = 1 + (N - 1) \cdot p$$

Gustafson's law addresses the shortcomings of Amdahl's law, which is based on the assumption of a fixed problem size, that is, of an execution workload that does not change with respect to the improvement of the resources. Gustafson's law instead proposes that programmers tend to increase the size of problems to fully exploit the computing power that becomes available as the resources improve.

Gustafson and his colleagues further observed from their workloads that the time for the serial part typically does not grow as the problem and the system scale; that is, $s$ is fixed. This gives a linear model between the processor count $N$ and the speedup $S$ with slope $1 - s$. Also, $S$ varies only linearly with $s$, rather than hyperbolically as under Amdahl's law. With these observations, Gustafson "expect[ed] to extend [their] success [on parallel computing] to a broader range of applications and even larger values for $N$".

The impact of Gustafson's law was to shift research goals to select or reformulate problems so that solving a larger problem in the same amount of time would be possible.
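To make the contrast concrete, here is a minimal Python sketch (the function names `gustafson_speedup` and `amdahl_speedup` are illustrative, not from the 1988 article) that evaluates Gustafson's scaled speedup next to Amdahl's fixed-size speedup for the same nominal serial fraction. Note that the two laws define the serial fraction against different baselines (the parallel run versus the serial run), so the comparison only illustrates the shapes of the two curves.

```python
def gustafson_speedup(s: float, n: int) -> float:
    """Scaled speedup S = s + (1 - s) * N: linear in the processor count N."""
    return s + (1.0 - s) * n


def amdahl_speedup(s: float, n: int) -> float:
    """Fixed-size speedup 1 / (s + (1 - s) / N): bounded above by 1 / s."""
    return 1.0 / (s + (1.0 - s) / n)


if __name__ == "__main__":
    s = 0.05  # assumed serial fraction, chosen only for illustration
    for n in (1, 10, 100, 1000):
        print(f"N={n:4d}  Gustafson S={gustafson_speedup(s, n):7.2f}  "
              f"Amdahl S={amdahl_speedup(s, n):5.2f}")
```

With $s = 0.05$, the scaled speedup reaches $950.05$ on $1000$ processors, while the fixed-size speedup levels off below $1/s = 20$, reflecting the linear-versus-bounded behavior described above.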