Summary
Chapel, the Cascade High Productivity Language, is a parallel programming language developed by Cray and later by Hewlett Packard Enterprise, which acquired Cray. It originated in the Cray Cascade project, a participant in DARPA's High Productivity Computing Systems (HPCS) program, which had the goal of increasing supercomputer productivity by 2010. It is developed as an open-source project under version 2 of the Apache License.

The Chapel compiler is written in C and C++ (C++14). The backend (i.e. the optimizer) is LLVM, written in C++. Python 3.7 or newer is required for some optional components, such as Chapel's test system and c2chapel, a tool that generates Chapel bindings from C code. By default, Chapel compiles to binary executables, but it can also compile to C source code, in which case LLVM is not used. Chapel code can also be compiled into libraries callable from C, Fortran, or Python.

Chapel aims to improve the programmability of parallel computers in general and the Cascade system in particular, by providing a higher level of expression than current programming languages and by improving the separation between algorithmic expression and data structure implementation details. The language designers aspire for Chapel to bridge the gap between current high-performance computing (HPC) practitioners, whom they describe as Fortran, C, or C++ users writing procedural code with technologies such as OpenMP and MPI, and newly graduating programmers, who tend to prefer Java, Python, or MATLAB, with only some having experience in C++ or C. Chapel aims to offer the productivity advances of the latter languages without alienating users of the former.

Chapel supports a multithreaded parallel programming model at a high level by providing abstractions for data parallelism, task parallelism, and nested parallelism. It enables optimizations for the locality of data and computation via abstractions for data distribution and data-driven placement of subcomputations.
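To make these abstractions concrete, here is a minimal Chapel sketch written for this summary (an illustrative example, not taken from the Chapel documentation): a forall loop expresses data parallelism, cobegin expresses task parallelism, and on-clauses over the built-in Locales array place computation on specific locales.

    config const n = 1000000;

    // Data parallelism: the forall loop is executed cooperatively by multiple tasks.
    var A: [1..n] real;
    forall i in 1..n do
      A[i] = i * 2.0;

    // Task parallelism: cobegin runs each statement as its own task and waits for both.
    cobegin {
      writeln("sum = ", + reduce A);
      writeln("max = ", max reduce A);
    }

    // Locality: an on-clause places a computation on a given locale (e.g. a compute node).
    coforall loc in Locales do
      on loc do
        writeln("Hello from locale ", here.id, " of ", numLocales);

For the library use case mentioned above, Chapel's export keyword marks procedures intended to be callable from outside Chapel; compiling with chpl --library then produces a library and header that C code can link against. The file and procedure names below are hypothetical.

    // mylib.chpl (hypothetical example)
    export proc addOne(x: int): int {
      return x + 1;
    }

Compiled with chpl --library mylib.chpl, this yields a library whose exported symbols can be called from C or Fortran, or, with additional tooling, from Python.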