An AI accelerator is a class of specialized hardware accelerator or computer system designed to accelerate artificial intelligence and machine learning applications, including artificial neural networks and machine vision. Typical applications include algorithms for robotics, Internet of Things, and other data-intensive or sensor-driven tasks. They are often manycore designs and generally focus on low-precision arithmetic, novel dataflow architectures, or in-memory computing capability. A typical AI integrated circuit chip contains billions of MOSFET transistors.
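To make the low-precision point concrete, the sketch below (illustrative only, and not drawn from any particular accelerator) shows symmetric int8 quantization of two float32 matrices followed by an integer multiply-accumulate, which is the core operation a low-precision MAC array performs; the quantization scheme and matrix sizes are arbitrary assumptions.

```python
# Minimal sketch of low-precision arithmetic: quantize float32 inputs to int8,
# run the multiply-accumulate on small integers, then rescale the result.
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor quantization of a float32 array to int8."""
    peak = np.max(np.abs(x))
    scale = peak / 127.0 if peak > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

# Hypothetical activation and weight matrices.
a = np.random.randn(4, 8).astype(np.float32)
w = np.random.randn(8, 3).astype(np.float32)

qa, sa = quantize_int8(a)
qw, sw = quantize_int8(w)

# Integer matmul accumulated in int32 (what an accelerator's MAC array does),
# then rescaled back to float32.
y_int8 = (qa.astype(np.int32) @ qw.astype(np.int32)) * (sa * sw)
y_fp32 = a @ w

print("max abs error:", np.max(np.abs(y_int8 - y_fp32)))
```

The error printed at the end illustrates the accuracy trade-off accepted in exchange for smaller arithmetic units and reduced memory traffic.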
A number of vendor-specific terms exist for devices in this category, and it is an emerging technology without a dominant design.
Computer systems have frequently complemented the CPU with special-purpose accelerators for specialized tasks, known as coprocessors. Notable application-specific hardware units include video cards for graphics, sound cards, graphics processing units and digital signal processors. As deep learning and artificial intelligence workloads rose in prominence in the 2010s, specialized hardware units were developed or adapted from existing products to accelerate these tasks. Benchmarks such as MLPerf may be used to evaluate the performance of AI accelerators.
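As a rough illustration of how throughput might be measured, the toy timing loop below reports samples per second for a repeated matrix multiplication; this is not MLPerf methodology, and all sizes and iteration counts are arbitrary assumptions.

```python
# Illustrative throughput measurement: time a batch of matrix multiplications
# and report samples processed per second.
import time
import numpy as np

x = np.random.randn(256, 1024).astype(np.float32)   # hypothetical input batch
w = np.random.randn(1024, 1024).astype(np.float32)  # hypothetical layer weights

iters = 100
start = time.perf_counter()
for _ in range(iters):
    _ = x @ w
elapsed = time.perf_counter() - start

print(f"{iters * x.shape[0] / elapsed:.0f} samples/s")
```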
First attempts like Intel's ETANN 80170NX incorporated analog circuits to compute neural functions. Later all-digital chips like the Nestor/Intel Ni1000 followed. As early as 1993, digital signal processors were used as neural network accelerators to accelerate optical character recognition software.
Already in 1988, Wei Zhang et al. discussed fast optical implementations of convolutional neural networks for alphabet recognition.
In the 1990s, there were also attempts to create parallel high-throughput systems for workstations aimed at various applications, including neural network simulations. FPGA-based accelerators were also first explored in the 1990s for both inference and training. Smartphones began incorporating AI accelerators starting with the Qualcomm Snapdragon 820 in 2015.