Larrabee is the codename for a cancelled GPGPU chip that Intel was developing separately from its current line of integrated graphics accelerators. It is named after either Mount Larrabee or Larrabee State Park in Whatcom County, Washington, near the town of Bellingham. The chip was to be released in 2010 as the core of a consumer 3D graphics card, but these plans were cancelled due to delays and disappointing early performance figures. The project to produce a GPU retail product directly from the Larrabee research project was terminated in May 2010 and its technology was passed on to the Xeon Phi. The Intel MIC multiprocessor architecture announced in 2010 inherited many design elements from the Larrabee project, but does not function as a graphics processing unit; the product is intended as a co-processor for high performance computing.
Almost a decade later, on June 12, 2018, Intel revived the idea of a dedicated GPU, announcing its intention to ship a discrete graphics card by 2020. This effort eventually produced the Intel Xe and Intel Arc series, released in September 2020 and March 2022, respectively, but both were unconnected to the work on the Larrabee project.
On December 4, 2009, Intel officially announced that the first-generation Larrabee would not be released as a consumer GPU product. Instead, it was to be released as a development platform for graphics and high-performance computing. Officially, the strategic reset was attributed to delays in hardware and software development. On May 25, 2010, the Technology@Intel blog announced that Larrabee would not be released as a GPU at all, but instead as a high-performance computing product competing with the Nvidia Tesla.
Multiprocessors are a core component in all types of computing infrastructure, from phones to datacenters. This course will build on the prerequisites of processor design and concurrency to introduce ...
Biochemistry is a key discipline for the Life Sciences. Biological Chemistry I and II are two tightly interconnected courses that aim to describe and understand in molecular terms the processes that ...
Multiprocessors are now the de facto building blocks for all computer systems. This course will build upon the basic concepts offered in Computer Architecture I to cover the architecture and organization ...
Xeon Phi was a series of x86 manycore processors designed and made by Intel. It was intended for use in supercomputers, servers, and high-end workstations. Its architecture allowed use of standard programming languages and application programming interfaces (APIs) such as OpenMP. Xeon Phi launched in 2010. Since it was originally based on an earlier GPU design (codenamed "Larrabee") by Intel that was cancelled in 2009, it shared application areas with GPUs.
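To illustrate that point, the sketch below shows an ordinary OpenMP parallel loop in C. It is a minimal, hypothetical example of the kind of standard shared-memory code a manycore processor such as Xeon Phi could run, not Intel sample code, and it builds with any OpenMP-capable compiler.

```c
/* Minimal sketch of standard OpenMP code (SAXPY); nothing here is
 * Xeon Phi-specific, the same source builds for any OpenMP target. */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N (1 << 20)

int main(void) {
    float *x = malloc(N * sizeof *x);
    float *y = malloc(N * sizeof *y);
    const float a = 2.0f;

    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* The loop iterations are distributed across all available hardware
     * threads; on a manycore part this is where the core count pays off. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("y[0] = %f, max threads = %d\n", y[0], omp_get_max_threads());
    free(x);
    free(y);
    return 0;
}
```

Built with, for example, gcc -fopenmp, the loop is split among however many hardware threads the target processor exposes, with no device-specific code in the source.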
Bonnell is a CPU microarchitecture used by Intel Atom processors that can execute up to two instructions per cycle. Like many other x86 microprocessors, it translates x86 instructions (CISC instructions) into simpler internal operations (sometimes referred to as micro-ops, effectively RISC-style instructions) prior to execution. The majority of instructions produce one micro-op when translated, with around 4% of instructions used in typical programs producing multiple micro-ops.
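To make the translation step concrete, here is a toy C model of CISC-to-micro-op cracking. The instruction classes, micro-op names, and counts are invented for illustration and are not Bonnell's actual decode rules.

```c
/* Toy illustration of CISC-to-micro-op "cracking": each incoming x86-style
 * instruction class maps to a short sequence of simpler internal operations.
 * The classes, micro-op names, and counts are hypothetical. */
#include <stdio.h>

typedef enum { UOP_LOAD, UOP_STORE, UOP_ALU } uop_t;

typedef struct {
    const char *mnemonic;  /* source-level instruction (hypothetical examples) */
    int         n_uops;    /* how many micro-ops it cracks into */
    uop_t       uops[3];
} decode_entry;

static const char *uop_name(uop_t u) {
    switch (u) {
    case UOP_LOAD:  return "load";
    case UOP_STORE: return "store";
    default:        return "alu";
    }
}

int main(void) {
    static const decode_entry table[] = {
        { "add eax, ebx",   1, { UOP_ALU } },                      /* common case: one micro-op */
        { "mov eax, [mem]", 1, { UOP_LOAD } },
        { "add [mem], eax", 3, { UOP_LOAD, UOP_ALU, UOP_STORE } }  /* read-modify-write cracks into several */
    };

    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++) {
        printf("%-16s ->", table[i].mnemonic);
        for (int j = 0; j < table[i].n_uops; j++)
            printf(" %s", uop_name(table[i].uops[j]));
        printf("\n");
    }
    return 0;
}
```

Under the figure quoted above, if roughly 96% of executed instructions produce one micro-op and the remaining 4% produce, say, two or three each, the dynamic average works out to about 1.04 to 1.08 micro-ops per instruction.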
Nvidia Tesla was the name of Nvidia's line of products targeted at stream processing or general-purpose graphics processing units (GPGPU), named after pioneering electrical engineer Nikola Tesla. Its products began using GPUs from the G80 series and continued to accompany the release of new chips. They were programmable using the CUDA or OpenCL APIs. The Nvidia Tesla product line competed with AMD's Radeon Instinct and Intel Xeon Phi lines of deep learning and GPU cards.
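As a rough sketch of what programming such a card looked like, the following C program adds two vectors through the OpenCL host API. It targets whatever OpenCL device is available rather than Tesla hardware specifically, and error handling is omitted for brevity.

```c
/* Minimal sketch of GPGPU-style stream processing via the OpenCL C API.
 * Runs on any available OpenCL device; error handling omitted for brevity. */
#include <stdio.h>
#include <CL/cl.h>

#define N 1024

static const char *kernel_src =
    "__kernel void vadd(__global const float *a,"
    "                   __global const float *b,"
    "                   __global float *c) {"
    "    int i = get_global_id(0);"
    "    c[i] = a[i] + b[i];"
    "}";

int main(void) {
    float a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    /* Pick the first platform and a default device on it. */
    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, NULL);
    cl_device_id device;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

    /* Build the kernel from source at run time. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel vadd = clCreateKernel(prog, "vadd", NULL);

    /* Copy inputs to device buffers, allocate the output buffer. */
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof a, a, NULL);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof b, b, NULL);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);

    clSetKernelArg(vadd, 0, sizeof da, &da);
    clSetKernelArg(vadd, 1, sizeof db, &db);
    clSetKernelArg(vadd, 2, sizeof dc, &dc);

    /* Launch one work-item per element, then read the result back. */
    size_t global = N;
    clEnqueueNDRangeKernel(queue, vadd, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(queue, dc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);

    printf("c[10] = %f (expected %f)\n", c[10], a[10] + b[10]);

    clReleaseMemObject(da); clReleaseMemObject(db); clReleaseMemObject(dc);
    clReleaseKernel(vadd); clReleaseProgram(prog);
    clReleaseCommandQueue(queue); clReleaseContext(ctx);
    return 0;
}
```

Because the kernel is compiled at run time for whichever device is found, the same source runs on hardware from any vendor with an OpenCL implementation, which was the portability argument for OpenCL relative to CUDA.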
Virtual Memory (VM) is a critical programming abstraction that is widely used in various modern computing platforms. With the rise of datacenter computing and birth of planet-scale online services, the semantic and capacity requirements from memory have ev ...
Today's continued increase in demand for processing power, despite the slowdown of Moore's law, has led to an increase in processor count, which has resulted in energy consumption and distribution problems. To address this, there is a growing trend toward ...
We present a massively parallel and scalable nodal discontinuous Galerkin finite element method (DGFEM) solver for the time-domain linearized acoustic wave equations. The solver is implemented using the libParanumal finite element framework with extensions ...