Explicitly parallel instruction computing
EPIC (explicitly parallel instruction computing) is a type of microprocessor architecture, used among others in DSPs and by Intel for the Itanium and Itanium 2 microprocessors. The EPIC philosophy rests on eliminating reordering at execution time: instructions are executed in exactly the order in which the compiler arranged them, but the compiler specifies which instructions are to be executed in parallel.
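To make the scheduling idea concrete, here is a minimal C sketch (illustrative only; this is source-level code with hypothetical variable names, not actual IA-64 assembly) of the dependence analysis an EPIC compiler performs when deciding which instructions may issue together:

```c
#include <stdio.h>

int main(void) {
    int a = 3, b = 4, c = 5, d = 6;

    /* Independent operations: an EPIC compiler could mark these two
     * for parallel issue within the same instruction group. */
    int x = a + b;
    int y = c * d;

    /* Depends on both x and y: must be placed in a later group,
     * after an explicit separator between groups. */
    int z = x - y;

    printf("%d\n", z);
    return 0;
}
```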
Data parallelism
Data parallelism, or parallelism by data distribution, is a paradigm of parallel programming. In other words, it is a particular way of writing programs for parallel machines. Algorithms in this category seek to distribute the data among the processes and to perform the same operations on each part, in the manner of SIMD. The opposite paradigm is task parallelism.
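As an illustration, here is a minimal C sketch of data parallelism using OpenMP (an assumed dependency; array and function names are hypothetical): the same operation is applied to every element, and the iteration space is distributed across threads.

```c
/* Compile with: gcc -fopenmp example.c (assumed invocation).
 * Without -fopenmp the pragma is ignored and the loop runs serially. */
#include <stdio.h>

#define N 1000000

int main(void) {
    static double a[N], b[N];

    for (int i = 0; i < N; i++)
        a[i] = (double)i;

    /* Each thread receives a chunk of the index range and applies
     * the identical operation to its share of the data. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        b[i] = 2.0 * a[i] + 1.0;

    printf("%f\n", b[N - 1]);
    return 0;
}
```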
Gather/scatter (vector addressing)
Gather/scatter is a type of memory addressing that at once collects (gathers) from, or stores (scatters) data to, multiple arbitrary indices. Examples of its use include sparse linear algebra operations, sorting algorithms, fast Fourier transforms, and some computational graph theory problems. It is the vector equivalent of register indirect addressing, with gather involving indexed reads, and scatter, indexed writes. Vector processors (and some SIMD units in CPUs) have hardware support for gather and scatter operations.
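The indexed-read and indexed-write patterns can be sketched as plain C loops (illustrative names and data; hardware gather/scatter performs each of these loops as a single vector operation):

```c
#include <stdio.h>

#define N 4

int main(void) {
    float src[8] = {10, 11, 12, 13, 14, 15, 16, 17};
    float dst[8] = {0};
    int   idx[N] = {6, 1, 4, 3};   /* arbitrary indices */
    float v[N];

    /* Gather: indexed reads, v[i] = src[idx[i]] */
    for (int i = 0; i < N; i++)
        v[i] = src[idx[i]];

    /* Scatter: indexed writes, dst[idx[i]] = v[i] */
    for (int i = 0; i < N; i++)
        dst[idx[i]] = v[i];

    for (int i = 0; i < N; i++)
        printf("%.0f ", v[i]);     /* prints 16 11 14 13 */
    printf("\n");
    return 0;
}
```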
Auto-vectorization
Auto-vectorization is a programming-language compilation technique that automatically adapts loops in functions operating on vectors, or more generally on matrices, to a vector processor or a SIMD unit. More generally, adapting computations to vector processors, whether manually or automatically, is called vectorization. As of 2011, the GNU GCC compiler used auto-vectorization techniques based on the tree-ssa framework for most SIMD instruction sets (3DNow!, SSE (and SSE2, SSE3), ARM NEON, and MVE, ARM's equivalent for embedded systems).
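As an example, the following C loop has the shape auto-vectorizers target: unit stride, a known trip count, and no cross-iteration dependences. The build command in the comment is an assumption about a typical GCC invocation, not the only way to enable vectorization.

```c
/* Assumed build: gcc -O3 -ftree-vectorize -fopt-info-vec loop.c
 * (-fopt-info-vec asks GCC to report which loops it vectorized). */
#include <stdio.h>

#define N 1024

/* Element-wise loop with no cross-iteration dependences; restrict
 * tells the compiler the arrays do not alias, easing vectorization. */
void saxpy(float a, const float *restrict x, const float *restrict y,
           float *restrict out) {
    for (int i = 0; i < N; i++)
        out[i] = a * x[i] + y[i];
}

int main(void) {
    static float x[N], y[N], out[N];
    for (int i = 0; i < N; i++) { x[i] = (float)i; y[i] = 1.0f; }
    saxpy(2.0f, x, y, out);
    printf("%f\n", out[N - 1]);
    return 0;
}
```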
History of supercomputing
The term supercomputing arose in the late 1920s in the United States in response to the IBM tabulators at Columbia University. The CDC 6600, released in 1964, is sometimes considered the first supercomputer. However, some earlier computers were considered supercomputers for their day, such as the IBM NORC in 1954, and in the early 1960s the UNIVAC LARC (1960), the IBM 7030 Stretch (1962), and the Manchester Atlas (1962), all of which were of comparable power.
Supercomputer architecture
Approaches to supercomputer architecture have taken dramatic turns since the earliest systems were introduced in the 1960s. Early supercomputer architectures pioneered by Seymour Cray relied on compact innovative designs and local parallelism to achieve superior computational peak performance. However, in time the demand for increased computational power ushered in the age of massively parallel systems.
High-performance computing
The term high-performance computing (HPC), in French calcul intensif, refers to computing activities carried out on a supercomputer, in particular for the purposes of numerical simulation and the pre-training of artificial intelligence models. High-performance computing brings together system administration (networking and security) and parallel programming into a multidisciplinary field that combines digital electronics, the development of computer architectures, system programming, programming languages, algorithmics, and computational techniques.
Minisupercomputer
Minisupercomputers constituted a short-lived class of computers that emerged in the mid-1980s, characterized by the combination of vector processing and small-scale multiprocessing. As scientific computing using vector processors became more popular, the need for lower-cost systems that might be used at the departmental level instead of the corporate level created an opportunity for new computer vendors to enter the market. As a generalization, the price targets for these smaller computers were one-tenth of the larger supercomputers.
Tesla (microarchitecture)
Tesla is the codename for a GPU microarchitecture developed by Nvidia and released in 2006 as the successor to the Curie microarchitecture. It was named after the pioneering electrical engineer Nikola Tesla. As Nvidia's first microarchitecture to implement unified shaders, it was used in the GeForce 8, GeForce 9, GeForce 100, GeForce 200, and GeForce 300 series of GPUs, collectively manufactured in 90 nm, 80 nm, 65 nm, 55 nm, and 40 nm processes.
Embarrassingly parallel
In parallel computing, an embarrassingly parallel workload or problem (also called embarrassingly parallelizable, perfectly parallel, delightfully parallel or pleasingly parallel) is one where little or no effort is needed to separate the problem into a number of parallel tasks. This is often the case where there is little or no dependency or need for communication between those parallel tasks, or for results between them. Thus, these are different from distributed computing problems that need communication between tasks, especially communication of intermediate results.
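A minimal sketch, assuming OpenMP and a deliberately crude inline pseudo-random generator (for illustration only): a Monte Carlo estimate of pi, where every sample is independent and the threads only combine their counters once, at the end.

```c
/* Compile with: gcc -fopenmp pi.c (assumed invocation); without
 * -fopenmp the pragma is ignored and the loop runs serially. */
#include <stdio.h>

#define SAMPLES 10000000L

int main(void) {
    long hits = 0;

    /* No communication between iterations; the reduction merges the
     * per-thread counters exactly once, after all tasks finish. */
    #pragma omp parallel for reduction(+:hits)
    for (long i = 0; i < SAMPLES; i++) {
        /* Cheap per-sample pseudo-random pair via LCG steps
         * (Knuth MMIX constants); crude, but keeps samples independent. */
        unsigned long long s =
            (unsigned long long)i * 6364136223846793005ULL
            + 1442695040888963407ULL;
        double x = (double)(s >> 11) / (double)(1ULL << 53);
        s = s * 6364136223846793005ULL + 1442695040888963407ULL;
        double y = (double)(s >> 11) / (double)(1ULL << 53);

        if (x * x + y * y <= 1.0)
            hits++;
    }

    printf("pi ~= %f\n", 4.0 * hits / SAMPLES);
    return 0;
}
```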