Parallelizing an algorithm consists of dividing the computation into a set of sequential operations, assigning the operations to threads, synchronizing the execution of the threads, specifying the data transfers required between threads, and mapping the threads onto processors. With current software technology, writing a parallel program that executes the parallelized algorithm involves mixing sequential code with calls to a communication library such as PVM, for both communication and synchronization. The authors introduce CAP (Computer-Aided Parallelization), a language extension to C++ from which C++/PVM programs are automatically generated. CAP allows programmers to specify (1) the threads in a parallel program, (2) the messages exchanged between threads, and (3) the ordering of sequential operations required to complete a parallel task. All CAP operations (sequential and parallel) have a single input and a single output, and there are no shared variables. CAP completely separates the computation description from the communication and synchronization specification. From the CAP specification, an MPMD (multiple program, multiple data) program is generated that executes on the processing elements of the parallel machine. The authors illustrate the features of the CAP parallel programming extension to C++, demonstrating its expressive power and the performance of CAP-specified applications.
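To make the execution model concrete, the following is a minimal plain C++ sketch, not actual CAP syntax, of the kind of program CAP generates: each operation takes a single input token and produces a single output token, the threads share no variables, and all coordination happens through message passing (here a simple blocking queue stands in for the PVM communication calls). The names Channel, square, and increment are illustrative only.

```cpp
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

// A blocking message queue standing in for PVM send/receive calls.
template <typename T>
class Channel {
public:
    void send(T value) {
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push(std::move(value));
        ready_.notify_one();
    }
    T receive() {
        std::unique_lock<std::mutex> lock(mutex_);
        ready_.wait(lock, [this] { return !queue_.empty(); });
        T value = queue_.front();
        queue_.pop();
        return value;
    }
private:
    std::mutex mutex_;
    std::condition_variable ready_;
    std::queue<T> queue_;
};

// Two sequential operations, each with a single input and a single output.
int square(int x)    { return x * x; }
int increment(int x) { return x + 1; }

int main() {
    Channel<int> stage1In, stage1Out, stage2Out;

    // Thread 1 runs 'square', thread 2 runs 'increment'. A CAP
    // specification would express this pipeline ordering declaratively;
    // the generated code performs the explicit message passing shown here.
    std::thread worker1([&] { stage1Out.send(square(stage1In.receive())); });
    std::thread worker2([&] { stage2Out.send(increment(stage1Out.receive())); });

    stage1In.send(7);                          // feed one input token
    std::cout << stage2Out.receive() << '\n';  // prints 50 = 7*7 + 1

    worker1.join();
    worker2.join();
    return 0;
}
```

The point of the sketch is the separation CAP enforces: the sequential operations (square, increment) contain no communication code, while the thread creation, message exchange, and ordering live entirely outside them.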