The term supercomputing arose in the late 1920s in the United States in response to the IBM tabulators at Columbia University. The CDC 6600, released in 1964, is sometimes considered the first supercomputer. However, some earlier computers were considered supercomputers for their day, such as the IBM NORC in 1954, and in the early 1960s, the UNIVAC LARC (1960), the IBM 7030 Stretch (1962), and the Manchester Atlas (1962), all of which were of comparable power.
While the supercomputers of the 1980s used only a few processors, in the 1990s, machines with thousands of processors began to appear both in the United States and in Japan, setting new computational performance records.
By the end of the 20th century, massively parallel supercomputers with thousands of "off-the-shelf" processors similar to those found in personal computers were constructed and broke through the teraflop computational barrier.
Progress in the first decade of the 21st century was dramatic, and supercomputers with over 60,000 processors appeared, reaching petaflop performance levels.
The term "Super Computing" was first used in the New York World in 1929 to refer to large custom-built tabulators that IBM had made for Columbia University.
In 1957, a group of engineers left Sperry Corporation to form Control Data Corporation (CDC) in Minneapolis, Minnesota. Seymour Cray left Sperry a year later to join his colleagues at CDC. In 1960, Cray completed the CDC 1604, one of the first generation of commercially successful transistorized computers and, at the time of its release, the fastest computer in the world. However, the fully transistorized Harwell CADET had been operational since 1955, and IBM delivered its commercially successful transistorized IBM 7090 in 1959.
Around 1960, Cray decided to design a computer that would be the fastest in the world by a large margin. After four years of experimentation alongside Jim Thornton, Dean Roush, and about 30 other engineers, Cray completed the CDC 6600 in 1964.