
# Arithmetic logic unit

Summary

The arithmetic logic unit (ALU; French: unité arithmétique et logique, UAL) is the component of a computer responsible for performing calculations. The ALU is most often included in the central processing unit or the microprocessor. It is built from a circuit of logic gates.
Different types of ALU
ALUs may or may not be specialized. Elementary ALUs operate on integers and can perform the common operations, which can be divided into four groups:
1. arithmetic operations: addition, subtraction, sign change, etc.

2. bitwise logical operations: one's complement, two's complement, AND, OR, XOR, NOT, NAND, etc.

3. comparisons: tests for equality, greater than, less than, and their "or equal" variants.

4. possibly shifts and rotations (though these operations are sometimes handled outside the ALU).
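
The four groups of operations can be sketched as a toy integer ALU in Python. This is a minimal illustration, not a standard interface: the operation names, the 8-bit word width, and the function signature are all assumed here for the example.

```python
WIDTH = 8
MASK = (1 << WIDTH) - 1  # 0xFF: keeps every result within the word width

def alu(op, a, b=0):
    """Apply one ALU operation to unsigned 8-bit operands."""
    if op == "add":
        return (a + b) & MASK          # arithmetic: wraps modulo 2**WIDTH
    if op == "sub":
        return (a - b) & MASK
    if op == "neg":
        return (-a) & MASK             # two's complement sign change
    if op == "not":
        return ~a & MASK               # one's complement
    if op == "and":
        return a & b                   # bitwise logical operations
    if op == "or":
        return a | b
    if op == "xor":
        return a ^ b
    if op == "eq":
        return int(a == b)             # comparisons produce a 1-bit flag
    if op == "lt":
        return int(a < b)
    if op == "shl":
        return (a << b) & MASK         # shifts (rotations omitted for brevity)
    if op == "shr":
        return a >> b
    raise ValueError(f"unknown operation: {op}")
```

Masking with `MASK` models the fixed word width of real hardware: `alu("add", 200, 100)` wraps around to 44 instead of producing 300.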

Some ALUs are specialized in handling floating-point numbers, in single or double precision.
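
The difference between the two precisions can be observed from Python, whose built-in `float` is double precision; the `to_single` helper below is an illustrative name, and it rounds a value through the 32-bit format using the standard `struct` module.

```python
import struct

def to_single(x):
    """Round a double-precision Python float through IEEE 754 single precision."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

# 0.1 has no exact binary representation; single precision keeps roughly
# 7 significant decimal digits, double precision roughly 15 to 16.
print(f"single: {to_single(0.1):.17f}")
print(f"double: {0.1:.17f}")
```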


Related concepts (50)

Central processing unit

A central processing unit (CPU), also called a central processor or main processor, is the most important processor in a given computer. Its electronic circuitry executes the instructions of a computer program.

Integrated circuit

An integrated circuit (IC), also called a microchip, is a semiconductor-based electronic component that implements one or more electronic functions of varying complexity.

Logic gate

A logic gate is an electronic circuit that performs logical (Boolean) operations on a sequence of bits provided by an input signal.
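
As a small illustration of Boolean completeness, every common gate can be built from NAND alone; the function names below are illustrative choices for this sketch.

```python
def nand(a, b):
    """NAND is functionally complete: all other gates can be built from it."""
    return 1 - (a & b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return nand(nand(a, b), nand(a, b))   # NOT applied to NAND

def or_(a, b):
    return nand(nand(a, a), nand(b, b))   # De Morgan: a OR b = NOT(NOT a AND NOT b)

def xor_(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))   # classic 4-NAND XOR construction
```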

Related courses (31)

EE-110: Logic systems (for MT)

This course covers the foundations of digital systems. Building on Boolean algebra and on combinational and sequential circuits, including finite state machines, methods for the analysis and synthesis of logic systems are studied and applied.

EE-208: Microcontrollers and digital systems

Microcontrollers and digital systems design covers the inner workings of a microcontroller, the basics of processor and computer system architecture, as well as microcontroller interfaces and serial communication protocols.

EE-334: Digital systems design

Students will acquire basic knowledge about methodologies and tools for the design, optimization, and verification of custom digital systems/hardware.
They learn how to design synchronous digital circuits on register transfer level, analyse their timing and implement them in VHDL and on FPGAs.

Related units (7)

For the last thirty years, electronics, at first built with discrete components, and then as Integrated Circuits (IC), have brought diverse and lasting improvements to our quality of life. Examples might include digital calculators, automotive and airplane control assistance, almost all electrical household appliances, and the almost ubiquitous Personal Computer.

Application-Specific Integrated Circuits (ASICs) were traditionally used for their high performance and low manufacturing cost, and were designed specifically for a single application with large volumes. But as lower product lifetimes and the pressures of fast marketing increased, ASICs' high design cost pushed for their replacement by Microprocessors. These processors, capable of implementing any functionality through a change in software, are thus often called General Purpose Processors. General purpose processors are used for everyday computing tasks, and found in all personal computers. They are also often used as building blocks for scientific supercomputers. Superscalar processors such as these require ever more processing power to run complex simulations, video games or versatile telecoms services. In the case of embedded applications, e.g. for portable devices, both performance and power consumption must be taken into account.

In a bid to adapt a processor to some extent to select applications, fully reconfigurable logic can greatly improve the performance of a processor, since it is shaped for the best possible execution with the available resources. However, as reconfigurable logic is far slower than custom logic, this gain is possible only for some specific applications with large parallelism, after a detailed study of the algorithm. Even though this process can be automated, it still requires large computing resources, and cannot be performed at run time.
To reduce the loss in speed compared to custom logic, it is possible to limit the reconfigurability to increase the breadth of applications where performance can be improved. However, as the application space increases, a careful analysis and design of the reconfigurability is required to minimize the speed loss, notably when dynamic reconfiguration is considered. As a case study, we analyze the feasibility of adding limited reconfigurability to the Floating Point Units (FPUs) of a general purpose processor. These rather large units execute all floating point operations, and may also be used for integer multiplication. If an application contains few or infrequent instructions that must be executed by the FPU, this idle hardware only increases power consumption without enhancing performance. This is often the case in non-scientific applications and even many recent and detailed video games which make heavy use of hardware display accelerators for 3D graphics.

In a fast multiplier such as can be found in the FPU of a high performance processor, the logic to perform multiplication is a large tree of compressors to add all the partial products together. It is possible to add logic to allow the reconfiguration of part of this tree as several extra Arithmetic and Logic Units (ALUs). This requires a detailed timing analysis for both the reconfigurable FPU and the extra ALUs, taking into account effects such as added wires and longer critical paths. Finally, the algorithm to decide when and how to reconfigure must be studied, in terms of efficiency and complexity. The results of adding this limited reconfigurability to a mainstream superscalar processor over a large set of compute intensive benchmarks show gains of up to 56% in the best case, with an average gain of 11%.
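
The compressor tree mentioned above can be illustrated with a carry-save (3:2) compressor sketch in Python. The helper names and the word-level modelling are assumptions for this example; real hardware reduces all bit columns in parallel rather than looping over addends.

```python
def compress_3_2(x, y, z):
    """One carry-save (3:2) compressor step: reduces three addends to two
    without propagating carries, which is what makes compressor trees fast."""
    s = x ^ y ^ z                              # bitwise sum, no carry chain
    c = ((x & y) | (x & z) | (y & z)) << 1     # carry bits, shifted into place
    return s, c

def tree_sum(addends):
    """Reduce a list of partial products to two terms, then one final add."""
    terms = list(addends)
    while len(terms) > 2:
        x, y, z = terms[:3]
        terms = list(compress_3_2(x, y, z)) + terms[3:]
    return sum(terms)

# Multiplication as partial-product summation: one shifted copy of the
# multiplicand per set bit of the multiplier (here 13 * 11, bits 0, 1, 3).
partials = [13 << i for i in range(4) if (11 >> i) & 1]
assert tree_sum(partials) == 13 * 11
```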
The application to an idealized huge top processor still shows slightly positive average gains, as the limits of available parallelism are reached, bounded by both the application and many of the characteristics of the processor. In all cases, binary compatibility is maintained, allowing the re-use of all existing software. We show that adding limited reconfigurability to a general purpose superscalar processor can produce interesting gains over a wide range of applications while maintaining binary compatibility, and without large modifications to the original design. Limited reconfigurability is worthwhile as it increases the design space, allowing gains to apply to a larger set of applications. These gains are achieved through careful study and optimization of the reconfigurable logic and the decision algorithm.

Related lectures (56)

Abelian varieties are fascinating objects, combining the fields of geometry and arithmetic. While the interest in abelian varieties has long been purely theoretical, they saw their first real-world application in cryptography in the mid-1980s, and have ever since led to broad research on both the computational and the arithmetic side. The most instructive examples of abelian varieties are elliptic curves and Jacobian varieties of hyperelliptic curves, and they come naturally equipped with some additional structure, called a principal polarization. Morphisms between abelian varieties that respect both the geometric and the arithmetic structure are called isogenies. In this thesis we focus on the computation of isogenies with cyclic kernel between principally polarized abelian varieties over finite fields.

The efficient synthesis of circuits is a well-studied problem. Due to the NP-hardness of the problem, no optimal algorithm has been presented so far. However, the heuristics presented by several researchers in the past, which are also adopted by commercially available tools, are able to generate near-optimal design implementations for most circuits. Apart from very few exceptions, these heuristics exploit the rules of Boolean algebra, involving the logical operations OR and AND, in order to transform the circuit. The approach works well for common logic circuits where OR and AND gates constitute the major portion of the circuitry. On the other hand, on arithmetic circuits, which include a large proportion of XOR gates in addition to the other two basic types, these heuristics perform poorly. For arithmetic circuits, current logic synthesis tools generate design implementations which are far from optimal. This is the case even for very common arithmetic circuits such as adders and multipliers. Current synthesis tools are unable to convert a Ripple Carry Adder (RCA) into a Carry Look-Ahead Adder (CLA) when synthesized for delay. For this reason, designers still rely on manually explored designs for arithmetic circuits. In this work, we explore the challenges in the efficient synthesis of arithmetic circuits and present a set of efficient algorithms to overcome these challenges. The presented algorithms vary in their computational complexity, and also in the granularity of the circuit details at which they work. We also present a methodology to combine these algorithms so that they can be applied on larger circuits without losing performance. The developed method, to which we refer as pre-synthesis circuit restructuring, starts with an elementary description of the input circuit and generates a quasi-optimal implementation of the circuit. The presented algorithms have been tested on a wide variety of circuits, including both arithmetic and nonarithmetic circuits.

In nonarithmetic circuits, the generated design implementations have performance comparable to those generated by state-of-the-art techniques. However, for arithmetic and composite (mixture of arithmetic and nonarithmetic) circuits, our algorithms generate significantly better implementations. Contrary to currently used synthesis tools, our algorithms are able to convert a Ripple Carry Adder into a Carry Look-Ahead Adder without using any information about the functionality of the input circuit. For some circuits, such as multipliers and multi-input comparators, our algorithms are able to generate completely new design implementations, which have not yet been explored manually. Since our algorithm is able to generate a meaningful (i.e., near optimal) architectural implementation of a circuit component without any prior knowledge about its functionality, it eliminates the need for library implementations of arithmetic circuits. Although the algorithms presented here are not specific to arithmetic circuits and can be applied to any circuit, due to their higher complexity compared to other logic synthesis heuristics, the use of our method is recommended for arithmetic circuits only. In addition to experimental evidence, the effectiveness of our algorithm on arithmetic circuits is also proved theoretically.
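
The contrast between the two adder architectures can be sketched at the bit level in Python. The list-of-bits encoding (least significant bit first) is an illustrative modelling choice, and the look-ahead version is written sequentially here even though its point in hardware is that the generate/propagate logic computes the carries in parallel.

```python
def ripple_carry_add(a_bits, b_bits):
    """Ripple Carry Adder: each full adder waits for the previous carry,
    so the critical path grows linearly with the word width."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):           # bit 0 (LSB) first
        out.append(a ^ b ^ carry)
        carry = (a & b) | (carry & (a ^ b))
    return out, carry

def carry_lookahead_add(a_bits, b_bits):
    """Carry Look-Ahead Adder: generate (g) and propagate (p) signals let
    every carry be expressed directly in terms of the inputs, breaking the
    linear carry chain in hardware."""
    g = [a & b for a, b in zip(a_bits, b_bits)]   # this position creates a carry
    p = [a ^ b for a, b in zip(a_bits, b_bits)]   # this position passes one on
    carries = [0]
    for i in range(len(a_bits)):
        carries.append(g[i] | (p[i] & carries[i]))
    out = [p[i] ^ carries[i] for i in range(len(a_bits))]
    return out, carries[-1]
```

Both functions compute the same sums; only the carry structure differs, which is exactly the distinction the synthesis tools discussed above fail to discover automatically.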