Cone
A cone is a three-dimensional geometric shape that tapers smoothly from a flat base (frequently, though not necessarily, circular) to a point called the apex or vertex. A cone is formed by a set of line segments, half-lines, or lines connecting a common point, the apex, to all of the points on a base that lies in a plane not containing the apex. Depending on the author, the base may be restricted to be a circle, any one-dimensional quadratic form in the plane, any closed one-dimensional figure, or any of the above plus all the enclosed points.
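Stated symbolically (a sketch only; the symbols v, B, and λ below are chosen here for illustration and are not taken from the source): with apex v and a base region B lying in a plane that does not contain v, the solid cone is the union of the segments joining v to the points of B,

    C = \{\, (1 - \lambda)\, v + \lambda\, b \;:\; b \in B,\ \lambda \in [0, 1] \,\}.

Letting λ range over [0, ∞) gives the half-line version of the construction, and letting it range over all real numbers gives the line version mentioned above.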
Parallel computing
Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has long been employed in high-performance computing, but has gained broader interest due to the physical constraints preventing frequency scaling.
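As a minimal illustration of this idea in Python (the chunking scheme and the partial_sum function below are invented for this sketch, assuming a CPU-bound task), a large summation is divided into smaller pieces that separate worker processes handle simultaneously:

    from multiprocessing import Pool

    def partial_sum(chunk):
        # Each worker computes its piece of the overall result independently.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        # Divide the large problem into smaller chunks, one per worker.
        n_workers = 4
        chunks = [data[i::n_workers] for i in range(n_workers)]
        with Pool(n_workers) as pool:
            # The chunks are processed at the same time in separate processes.
            partials = pool.map(partial_sum, chunks)
        total = sum(partials)  # Combine the partial results.
        print(total)

This is data parallelism in the sense used above: the same operation is applied to different pieces of the data, and the partial results are combined at the end.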
Conical surface
In geometry, a (general) conical surface is the unbounded surface formed by the union of all the straight lines that pass through a fixed point (the apex or vertex) and any point of some fixed space curve (the directrix) that does not contain the apex. Each of those lines is called a generatrix of the surface. Every conical surface is ruled and developable. In general, a conical surface consists of two congruent unbounded halves joined by the apex.
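In the same spirit (again a sketch; v, γ, and λ are notation chosen here, not from the source), if v is the apex and γ(t) parametrizes the directrix, the conical surface is traced out by the lines through v and the points of the directrix,

    S(t, \lambda) = v + \lambda\,(\gamma(t) - v), \qquad \lambda \in \mathbb{R},

where λ ≥ 0 and λ ≤ 0 give the two congruent unbounded halves that meet at the apex, and each fixed t gives one generatrix.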
Embarrassingly parallel
In parallel computing, an embarrassingly parallel workload or problem (also called embarrassingly parallelizable, perfectly parallel, delightfully parallel or pleasingly parallel) is one where little or no effort is needed to separate the problem into a number of parallel tasks. This is often the case where there is little or no dependency or need for communication between those parallel tasks, or for results between them. Thus, these are different from distributed computing problems that need communication between tasks, especially communication of intermediate results.
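A minimal sketch of such a workload in Python (the Monte Carlo estimate of pi and the names used below are chosen purely for illustration): each task draws its own samples and never exchanges intermediate results with the others; the only coordination is combining the final counts.

    import random
    from concurrent.futures import ProcessPoolExecutor

    SAMPLES_PER_TASK = 100_000

    def count_hits(seed):
        # Each task is fully independent: its own RNG, its own samples,
        # and no communication with any other task while it runs.
        rng = random.Random(seed)
        return sum(
            1 for _ in range(SAMPLES_PER_TASK)
            if rng.random() ** 2 + rng.random() ** 2 <= 1.0
        )

    if __name__ == "__main__":
        n_tasks = 8
        with ProcessPoolExecutor() as pool:
            hits = list(pool.map(count_hits, range(n_tasks)))
        # Results are combined only once, after all tasks have finished.
        print(4 * sum(hits) / (n_tasks * SAMPLES_PER_TASK))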
Massively parallel
Massively parallel is the term for using a large number of computer processors (or separate computers) to simultaneously perform a set of coordinated computations in parallel. GPUs are massively parallel architectures with tens of thousands of threads. One approach is grid computing, where the processing power of many computers in distributed, diverse administrative domains is opportunistically used whenever a computer is available. An example is BOINC, a volunteer-based, opportunistic grid system, whereby the grid provides power only on a best-effort basis.