X10 is a programming language being developed by IBM at the Thomas J. Watson Research Center as part of the Productive, Easy-to-use, Reliable Computing System (PERCS) project funded by DARPA's High Productivity Computing Systems (HPCS) program.
Its primary authors are Kemal Ebcioğlu, Saravanan Arumugam (Aswath), Vijay Saraswat, and Vivek Sarkar.
X10 is designed specifically for parallel computing using the partitioned global address space (PGAS) model.
A computation is divided among a set of places, each of which holds some data and hosts one or more activities that operate on those data. X10 has a constrained type system for object-oriented programming, a form of dependent types. Other features include user-defined primitive struct types, globally distributed arrays, and structured and unstructured parallelism.
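To make this concrete, here is a minimal sketch in X10 syntax (the class and method names are hypothetical): one activity is shifted to each place with at, and a method signature uses a constrained type to require that two arrays have the same size.

    public class PlacesSketch {
        // Constrained (dependent) type: dst must have the same size as src.
        public static def copyInto(src:Rail[Long], dst:Rail[Long]{self.size == src.size}) {
            for (i in 0..(src.size - 1)) dst(i) = src(i);
        }

        public static def main(args:Rail[String]) {
            for (p in Place.places()) {
                at (p) Console.OUT.println("Hello from " + here);  // statement runs at place p
            }
        }
    }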
X10 uses parent and child relationships between activities to prevent deadlock, which can occur when two or more processes wait for each other to finish before either can proceed. An activity may spawn one or more child activities, which may themselves have children. Children cannot wait for a parent to finish, but a parent can wait for its children using the finish construct.
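A small sketch of this parent-child discipline (hypothetical program): the parent spawns two children with async and uses finish to wait for both.

    public class FinishSketch {
        public static def main(args:Rail[String]) {
            finish {
                async Console.OUT.println("child activity 1");
                async Console.OUT.println("child activity 2");
            } // the parent blocks here until every activity spawned inside has terminated
            Console.OUT.println("parent continues after all children finish");
        }
    }

Because children can only be awaited by an enclosing finish, the wait-for relationships between parents and children stay acyclic, which is what rules out the circular waiting described above.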
Chapel, the Cascade High Productivity Language, is a parallel programming language developed by Cray, and later by Hewlett Packard Enterprise, which acquired Cray. It was developed as part of the Cray Cascade project, a participant in DARPA's High Productivity Computing Systems (HPCS) program, which had the goal of increasing supercomputer productivity by 2010. It is being developed as an open-source project under version 2 of the Apache License. The Chapel compiler is written in C and C++ (C++14).
In computer science, partitioned global address space (PGAS) is a parallel programming model. PGAS is typified by communication operations involving a global memory address space abstraction that is logically partitioned, where a portion is local to each process, thread, or processing element. The novelty of PGAS is that the portions of the shared memory space may have an affinity for a particular process, thereby exploiting locality of reference in order to improve performance.
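As a loose illustration of this affinity in X10 syntax (hypothetical program), each place runs an activity over an array allocated in its own partition, so all accesses are local; reaching data held at another place would require shifting the activity there with at.

    public class PgasSketch {
        public static def main(args:Rail[String]) {
            finish for (p in Place.places()) at (p) async {
                // This Rail lives in place p's partition of the address space.
                val localPart = new Rail[Long](8, (i:Long) => i);
                var sum:Long = 0;
                for (i in 0..(localPart.size - 1)) sum += localPart(i);  // purely local reads
                Console.OUT.println(here + " computed local sum " + sum);
            }
        }
    }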
In computing, a parallel programming model is an abstraction of parallel computer architecture, with which it is convenient to express algorithms and their composition in programs. The value of a programming model can be judged on its generality (how well a range of different problems can be expressed for a variety of different architectures) and its performance (how efficiently the compiled programs can execute). The implementation of a parallel programming model can take the form of a library invoked from a sequential language, an extension to an existing language, or an entirely new language.