Concept

Cray-1

The Cray-1 was a supercomputer designed, manufactured and marketed by Cray Research. Announced in 1975, the first Cray-1 system was installed at Los Alamos National Laboratory in 1976. Eventually, eighty Cray-1s were sold, making it one of the most successful supercomputers in history. It is perhaps best known for its unique shape: a relatively small C-shaped cabinet with a ring of benches around the outside covering the power supplies and the cooling system.

The Cray-1 was the first supercomputer to successfully implement the vector processor design. These systems improve the performance of math operations by arranging memory and registers so that a single operation can be applied quickly to a large set of data. Earlier systems such as the CDC STAR-100 and the Texas Instruments ASC had implemented these concepts, but did so in a way that seriously limited their performance. The Cray-1 addressed these problems and produced a machine that ran several times faster than any similar design.

The Cray-1's architect was Seymour Cray; the chief engineer was Cray Research co-founder Lester Davis. They went on to design several new machines using the same basic concepts, which retained the performance crown into the 1990s.

From 1968 to 1972, Seymour Cray of Control Data Corporation (CDC) worked on the CDC 8600, the successor to his earlier CDC 6600 and CDC 7600 designs. The 8600 was essentially four 7600s in a box, with an additional special mode that allowed them to operate in lock-step in a SIMD fashion. Jim Thornton, formerly Cray's engineering partner on earlier designs, had started a more radical project known as the CDC STAR-100. Unlike the 8600's brute-force approach to performance, the STAR took an entirely different route: its main processor had lower performance than the 7600, but it added hardware and instructions to speed up particularly common supercomputer tasks. By 1972, the 8600 had reached a dead end; the machine was so complex that it was impossible to get one working properly.
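To make the vector-processing idea concrete, the sketch below (illustrative C, not actual Cray code) contrasts a plain scalar loop with a strip-mined version of the same computation y[i] = a*x[i] + y[i]. The 64-element strip length mirrors the Cray-1's 64-element vector registers: on such a machine, each strip maps onto a few vector instructions instead of 64 separate scalar operations. The function names, the use of float (the Cray-1 operated on 64-bit words), and the strip-mining loop are assumptions made purely for illustration.

#include <stdio.h>
#include <stddef.h>

/* Scalar version: one multiply-add per loop iteration. */
void saxpy_scalar(size_t n, float a, const float *x, float *y) {
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* Vector-style version: process the data in fixed-length strips, the way a
 * vector processor loads, multiplies, adds, and stores a whole vector
 * register per instruction. A real vector machine (or an auto-vectorizing
 * compiler) replaces the inner loop with single vector instructions. */
void saxpy_strip_mined(size_t n, float a, const float *x, float *y) {
    const size_t VL = 64;                       /* Cray-1 vector register length */
    for (size_t i = 0; i < n; i += VL) {
        size_t len = (n - i < VL) ? n - i : VL; /* final partial strip */
        for (size_t j = 0; j < len; j++)        /* conceptually one vector op */
            y[i + j] = a * x[i + j] + y[i + j];
    }
}

int main(void) {
    float x[100], y[100];
    for (size_t i = 0; i < 100; i++) { x[i] = (float)i; y[i] = 1.0f; }
    saxpy_strip_mined(100, 2.0f, x, y);
    printf("y[99] = %.1f\n", y[99]);            /* prints 199.0 */
    return 0;
}

The point of the strip-mined form is that the work per instruction grows from one element to a full vector register, which is how the Cray-1 amortized instruction-issue and memory-access overhead across 64 operands at a time.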

Related courses (3)
CS-423: Distributed information systems
This course introduces the foundations of information retrieval, data mining and knowledge bases, which constitute the foundations of today's Web-based distributed information systems.
CS-471: Advanced multiprocessor architecture
Multiprocessors are basic building blocks for all computer systems. This course covers the architecture and organization of modern multiprocessors, prevalent accelerators (e.g., GPU, TPU), and datacenters ...
CS-307: Introduction to multiprocessor architecture
Multiprocessors are a core component in all types of computing infrastructure, from phones to datacenters. This course will build on the prerequisites of processor design and concurrency to introduce ...
Related lectures (10)
Distributed Information Retrieval
Explores centralized and distributed information retrieval, including Fagin's Algorithm for efficient document identification.
Data-Parallel Programming: Vector & SIMD Processors
Explores data-parallel programming with vector processors and SIMD, and introduces MapReduce, Pregel, and TensorFlow.
Recommender Systems: Matrix Factorization & Evaluation
Explores matrix factorization techniques for recommender systems, including evaluation metrics like RMSE and NDCG.
Related publications (17)

A 500 x 500 Dual-Gate SPAD Imager With 100% Temporal Aperture and 1 ns Minimum Gate Length for FLIM and Phasor Imaging Applications

Edoardo Charbon, Claudio Bruschini, Andrei Ardelean, Paul Mos, Arin Can Ülkü, Michael Alan Wayne

In this article, we report on SwissSPAD3 (SS3), a 500 x 500 pixel single-photon avalanche diode (SPAD) array, fabricated in 0.18-μm CMOS technology. In this sensor, we introduce a novel dual-gate architecture with two contiguous temporal windows, or gate ...
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 2022

PaRiS: Causally Consistent Transactions with Non-blocking Reads and Partial Replication

Willy Zwaenepoel, Diego Didona, Kristina Spirovska

Geo-replicated data platforms are the backbone of several large-scale online services. Transactional Causal Consistency (TCC) is an attractive consistency level for building such platforms. TCC avoids many anomalies of eventual consistency, eschews the syn ...
IEEE COMPUTER SOC, 2019
Related concepts (16)
Cray
Cray Inc., a subsidiary of Hewlett Packard Enterprise, is an American supercomputer manufacturer headquartered in Seattle, Washington. It also manufactures systems for data storage and analytics. Several Cray supercomputer systems are listed in the TOP500, which ranks the most powerful supercomputers in the world. Cray manufactures its products in part in Chippewa Falls, Wisconsin, where its founder, Seymour Cray, was born and raised.
Parallel computing
Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has long been employed in high-performance computing, but has gained broader interest due to the physical constraints preventing frequency scaling.
Vector processor
In computing, a vector processor or array processor is a central processing unit (CPU) that implements an instruction set whose instructions are designed to operate efficiently and effectively on large one-dimensional arrays of data called vectors. This is in contrast to scalar processors, whose instructions operate on single data items only, and in contrast to scalar processors that are merely extended with single instruction, multiple data (SIMD) or SWAR arithmetic units.
