Edsger W. Dijkstra

Summary

Edsger Wybe Dijkstra (/ˈdaɪkstrə/; Dutch: [ˈɛtsxər ˈʋibə ˈdɛikstra]; 11 May 1930 – 6 August 2002) was a Dutch computer scientist, programmer, software engineer, and science essayist.
Born in Rotterdam, the Netherlands, Dijkstra studied mathematics and physics and then theoretical physics at the University of Leiden. Adriaan van Wijngaarden offered him a job as the first computer programmer in the Netherlands at the Mathematical Center in Amsterdam, where he worked from 1952 until 1962. He formulated and solved the shortest path problem in 1956, and in 1960 developed the first compiler for the programming language ALGOL 60 in conjunction with his colleague Jaap Zonneveld. In 1962 he moved to Eindhoven, and later to Nuenen, where he became a professor in the Mathematics Department at the Technische Hogeschool Eindhoven. In the late 1960s he built the THE multiprogramming system, which influenced the designs of subsequent systems through its use of software-based paged virtual memory. In 1973 Dijkstra joined Burroughs Corporation as a research fellow.
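
The shortest path problem mentioned above is the one solved by what is now known as Dijkstra's algorithm. The following is a minimal sketch in Java of the textbook priority-queue formulation; the adjacency-list representation, method names, and use of a binary heap are illustrative assumptions, not the original 1950s formulation.

```java
import java.util.*;

// Minimal sketch of Dijkstra's single-source shortest-path algorithm.
// The adjacency-list and binary-heap choices are illustrative assumptions.
class Dijkstra {
    /** adj.get(u) holds {v, weight} pairs; all weights must be non-negative. */
    static long[] shortestPaths(List<List<int[]>> adj, int source) {
        int n = adj.size();
        long[] dist = new long[n];
        Arrays.fill(dist, Long.MAX_VALUE);
        dist[source] = 0;

        // Priority queue of {tentative distance, vertex}, smallest distance first.
        PriorityQueue<long[]> pq = new PriorityQueue<>((a, b) -> Long.compare(a[0], b[0]));
        pq.add(new long[]{0, source});

        while (!pq.isEmpty()) {
            long[] top = pq.poll();
            long d = top[0];
            int u = (int) top[1];
            if (d > dist[u]) continue;              // stale queue entry: u already settled
            for (int[] edge : adj.get(u)) {
                int v = edge[0], w = edge[1];
                if (dist[u] + w < dist[v]) {        // relax edge (u, v)
                    dist[v] = dist[u] + w;
                    pq.add(new long[]{dist[v], v});
                }
            }
        }
        return dist;                                // dist[v] = length of a shortest source-to-v path
    }
}
```

With a binary heap this variant runs in O((|V| + |E|) log |V|) time and requires non-negative edge weights.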

Related courses (6)

COM-503: Performance evaluation

In this course you will learn the methods and techniques used to perform a sound performance evaluation during a research or development project.

MATH-265: Introduction to optimization and operations research

Introduction to major operations research models and optimization algorithms

MATH-261: Discrete optimization

This course is an introduction to linear and discrete optimization.
Warning: This is a mathematics course! While much of the course will be algorithmic in nature, you will still need to be able to prove theorems.

Related concepts (28)

Programming language

A programming language is a system of notation for writing computer programs. Most programming languages are text-based formal languages, but they may also be graphical. They are a kind of computer language.

Compiler

In computing, a compiler is a computer program that translates computer code written in one programming language (the source language) into another language (the target language). The name "compiler" is primarily used for programs that translate source code from a high-level programming language to a lower-level language (for example, assembly language, object code, or machine code) to create an executable program.

Computer science

Computer science is the study of computation, information, and automation. Computer science spans theoretical disciplines (such as algorithms, theory of computation, and information theory) to applied disciplines (including the design and implementation of hardware and software).

Related publications (3)

Hagit Albo Attiya, Rachid Guerraoui

Building correct and efficient concurrent algorithms is known to be a difficult problem of fundamental importance. To achieve efficiency, designers try to remove unnecessary and costly synchronization. However, not only is this manual trial-and-error process ad-hoc, time consuming and error-prone, but it often leaves designers pondering the question of: is it inherently impossible to eliminate certain synchronization, or is it that I was unable to eliminate it on this attempt and I should keep trying? In this paper we respond to this question. We prove that it is impossible to build concurrent implementations of classic and ubiquitous specifications such as sets, queues, stacks, mutual exclusion and read-modify-write operations, that completely eliminate the use of expensive synchronization. We prove that one cannot avoid the use of either: i) read-after-write (RAW), where a write to shared variable A is followed by a read to a different shared variable B without a write to B in between, or ii) atomic write-after-read (AWAR), where an atomic operation reads and then writes to shared locations. Unfortunately, enforcing RAW or AWAR is expensive on all current mainstream processors. To enforce RAW, memory ordering--also called fence or barrier--instructions must be used. To enforce AWAR, atomic instructions such as compare-and-swap are required. However, these instructions are typically substantially slower than regular instructions. Although algorithm designers frequently struggle to avoid RAW and AWAR, their attempts are often futile. Our result characterizes the cases where avoiding RAW and AWAR is impossible. On the flip side, our result can be used to guide designers towards new algorithms where RAW and AWAR can be eliminated.

2011
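
To make the two patterns in this abstract concrete, here is a minimal, hypothetical Java sketch (the names and the two-flag scenario are illustrative assumptions, not code from the paper). The first method shows a RAW pattern: a write to one shared variable followed by a read of a different shared variable, which needs the ordering guarantees that `volatile` (effectively a fence) provides under the Java memory model. The second shows an AWAR pattern implemented as a compare-and-swap loop.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical illustration of the RAW and AWAR patterns described above.
class SynchronizationPatterns {
    // Two shared flags, e.g. the "intent" flags of two processes in a
    // Dekker-style mutual exclusion attempt. Declaring them volatile makes
    // the JVM emit the memory-ordering (fence) instructions that RAW needs.
    static volatile boolean flagA = false;
    static volatile boolean flagB = false;

    // RAW: a write to shared variable A followed by a read of a *different*
    // shared variable B, with no write to B in between. Without a fence the
    // hardware may reorder the read before the write.
    static boolean tryEnterAsProcessA() {
        flagA = true;       // write to A
        return !flagB;      // read of B; enter only if the other flag is down
    }

    // AWAR: an atomic operation that reads and then writes a shared location,
    // here a compare-and-swap loop incrementing a shared counter.
    static final AtomicInteger counter = new AtomicInteger(0);

    static int incrementAwar() {
        while (true) {
            int old = counter.get();                      // read
            if (counter.compareAndSet(old, old + 1)) {    // atomic write-after-read
                return old + 1;
            }
        }
    }
}
```

Both the volatile accesses and compareAndSet map to the fence and atomic instructions that the abstract identifies as expensive on mainstream processors.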

Self-stabilization ensures that, after any transient fault, the system recovers in a finite time and eventually exhibits correct behavior. Speculation consists in guaranteeing that the system satisfies its requirements for any execution but exhibits significantly better performances for a subset of executions that are more probable. A speculative protocol is in this sense supposed to be both robust and efficient in practice. We introduce the notion of speculative stabilization which we illustrate through the mutual exclusion problem. We then present a novel speculatively stabilizing mutual exclusion protocol. Our protocol is self-stabilizing for any asynchronous execution. We prove that its stabilization time for synchronous executions is [diam(g)/2] steps (where diam(g) denotes the diameter of the system). This complexity result is of independent interest. The celebrated mutual exclusion protocol of Dijkstra stabilizes in n steps (where n is the number of processes) in synchronous executions and the question whether the stabilization time could be strictly smaller than the diameter has been open since then (almost 40 years). We show that this is indeed possible for any underlying topology. We also provide a lower bound proof that shows that our new stabilization time of [diam(g)/2] steps is optimal for synchronous executions, even if asynchronous stabilization is not required. Copyright 2013 ACM.

2013
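
The "celebrated mutual exclusion protocol of Dijkstra" referred to in this abstract is his 1974 self-stabilizing token ring (the K-state machine). Below is a minimal sketch under the usual assumptions (a ring of n processes, a distinguished process 0, and K > n states); the class and method names are my own, chosen for illustration.

```java
import java.util.Random;

// Minimal sketch of Dijkstra's K-state self-stabilizing token ring (1974).
// A process is "privileged" (holds the token) when its guard is true; from
// any arbitrary initial state, eventually exactly one process is privileged.
class DijkstraTokenRing {
    final int n;          // number of processes in the ring
    final int k;          // number of states; k > n is sufficient for stabilization
    final int[] state;    // state[i] in {0, ..., k-1}

    DijkstraTokenRing(int n, long seed) {
        this.n = n;
        this.k = n + 1;
        this.state = new int[n];
        Random rnd = new Random(seed);
        for (int i = 0; i < n; i++) {
            state[i] = rnd.nextInt(k);         // arbitrary (possibly corrupted) start state
        }
    }

    boolean privileged(int i) {
        if (i == 0) {
            return state[0] == state[n - 1];   // the distinguished "bottom" machine
        }
        return state[i] != state[i - 1];       // every other machine
    }

    void move(int i) {
        if (!privileged(i)) return;            // only a privileged process may move
        if (i == 0) {
            state[0] = (state[n - 1] + 1) % k; // bottom: advance its own state
        } else {
            state[i] = state[i - 1];           // others: copy the left neighbour
        }
    }

    int privilegedCount() {                    // equals 1 once the ring has stabilized
        int c = 0;
        for (int i = 0; i < n; i++) {
            if (privileged(i)) c++;
        }
        return c;
    }
}
```

This is the protocol whose synchronous stabilization takes on the order of n steps, which the 2013 paper above improves to roughly half the diameter for its speculatively stabilizing variant.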