In compiler construction, a basic block is a straight-line code sequence with no branches in except to the entry and no branches out except at the exit. This restricted form makes a basic block highly amenable to analysis. Compilers usually decompose programs into their basic blocks as a first step in the analysis process. Basic blocks form the vertices or nodes in a control-flow graph.
The code in a basic block has:
One entry point, meaning that no code within it is the destination of a jump instruction anywhere in the program.
One exit point, meaning that only the last instruction can cause the program to begin executing code in a different basic block.
Under these circumstances, whenever the first instruction in a basic block is executed, the rest of the instructions are necessarily executed exactly once and in order.
The code may be source code, assembly code, or some other sequence of instructions.
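To make this concrete, here is a small Python function annotated with the basic blocks a compiler might carve out of it; the labels B0–B2 are invented here for illustration and are not the output of any particular tool:

```python
def clamp_example(x):
    # B0: entry block; straight-line code ending at the conditional branch.
    y = x * 2
    z = y + 1
    if z > 10:       # the branch is the last instruction of B0
        # B1: reached only on the true edge; one entry, one exit.
        z = z - 10
    # B2: join point; control arrives from B0 (false edge) or from B1.
    return z
```

Each labelled region satisfies both properties above: control can only enter at its first statement and can only leave after its last.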
More formally, a sequence of instructions forms a basic block if:
The instruction in each position dominates (always executes before) all those in later positions.
No other instruction executes between two instructions in the sequence.
This definition is more general than the intuitive one in some ways: for example, it allows unconditional jumps to labels not targeted by other jumps. It embodies the properties that make basic blocks easy to work with when constructing algorithms that operate on them.
The blocks to which control may transfer after reaching the end of a block are called that block's successors, while the blocks from which control may have come when entering a block are called that block's predecessors. The start of a basic block may be jumped to from more than one location.
The algorithm for generating basic blocks from a listing of code is simple: the analyser scans over the code, marking block boundaries, which are instructions that may either begin or end a block because they either transfer control or accept control from another point. Then, the listing is simply "cut" at each of these points, and basic blocks remain.
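A minimal Python sketch of this "leader" approach, using a toy instruction format of (op, jump-target) pairs invented for the example:

```python
def find_leaders(instrs):
    """Identify the instructions that begin a basic block (the "leaders").

    Each instruction is an (op, target) pair; `target` is the index of the
    destination for 'jump'/'branch' ops and None otherwise. This tiny
    instruction format is an assumption made for the sketch.
    """
    leaders = {0}                        # the first instruction is a leader
    for i, (op, target) in enumerate(instrs):
        if op in ("jump", "branch"):
            leaders.add(target)          # a jump target is a leader
            if i + 1 < len(instrs):
                leaders.add(i + 1)       # the instruction after a jump is a leader
    return sorted(leaders)

def basic_blocks(instrs):
    """Cut the listing at each leader; each slice is one basic block."""
    bounds = find_leaders(instrs) + [len(instrs)]
    return [instrs[a:b] for a, b in zip(bounds, bounds[1:])]

# Example: indices 0..4; instruction 1 branches to 3, instruction 4 jumps to 0.
prog = [("op", None), ("branch", 3), ("op", None), ("op", None), ("jump", 0)]
for block in basic_blocks(prog):
    print(block)
```

Running this prints three blocks, each ending either at a control transfer or just before an instruction that receives control from elsewhere.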
Data-flow analysis is a technique for gathering information about the possible set of values calculated at various points in a computer program. A program's control-flow graph (CFG) is used to determine those parts of a program to which a particular value assigned to a variable might propagate. The information gathered is often used by compilers when optimizing a program. A canonical example of a data-flow analysis is reaching definitions.
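The sketch below shows what such an analysis can look like in Python: an iterative forward solver for reaching definitions that propagates gen/kill sets over the CFG until a fixed point is reached. The data layout (dicts of sets keyed by block id) is an assumption of this sketch, not a standard API:

```python
def reaching_definitions(blocks, succs, gen, kill):
    """Iterative forward data-flow solver for reaching definitions.

    blocks: list of block ids; succs: block id -> list of successor ids;
    gen/kill: block id -> set of definition ids generated/killed there.
    """
    preds = {b: set() for b in blocks}
    for b in blocks:
        for s in succs.get(b, ()):
            preds[s].add(b)
    IN = {b: set() for b in blocks}
    OUT = {b: set(gen[b]) for b in blocks}
    changed = True
    while changed:                       # iterate until a fixed point
        changed = False
        for b in blocks:
            IN[b] = set().union(*[OUT[p] for p in preds[b]])
            new_out = gen[b] | (IN[b] - kill[b])
            if new_out != OUT[b]:
                OUT[b] = new_out
                changed = True
    return IN, OUT

# Two blocks: B0 defines d1, then falls into B1, which defines d2 killing d1.
IN, OUT = reaching_definitions(
    blocks=[0, 1],
    succs={0: [1], 1: []},
    gen={0: {"d1"}, 1: {"d2"}},
    kill={0: set(), 1: {"d1"}},
)
print(OUT[1])   # {'d2'}: d1 no longer reaches the exit of B1
```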
In computing, an optimizing compiler is a compiler that tries to minimize or maximize some attributes of an executable computer program. Common requirements are to minimize a program's execution time, memory footprint, storage size, and power consumption (the last three being popular for portable computers). Compiler optimization is generally implemented using a sequence of optimizing transformations, algorithms which take a program and transform it to produce a semantically equivalent output program that uses fewer resources or executes faster.
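As a toy instance of one such transformation, the following sketch folds constant arithmetic at compile time using Python's standard-library ast module; a production compiler would run many passes like this in sequence:

```python
import ast

class ConstantFolder(ast.NodeTransformer):
    """Fold `Constant <op> Constant` into a single Constant node."""
    OPS = {ast.Add: lambda a, b: a + b,
           ast.Sub: lambda a, b: a - b,
           ast.Mult: lambda a, b: a * b}

    def visit_BinOp(self, node):
        self.generic_visit(node)         # fold the children first, bottom-up
        if (isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)
                and type(node.op) in self.OPS):
            value = self.OPS[type(node.op)](node.left.value, node.right.value)
            return ast.copy_location(ast.Constant(value), node)
        return node

tree = ConstantFolder().visit(ast.parse("x = 2 * 3 + 4"))
print(ast.unparse(ast.fix_missing_locations(tree)))   # -> x = 10
```

The transformed program is semantically equivalent to the original but does less work at run time, which is exactly the contract an optimizing transformation must honor.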
In the context of functional and imperative programming languages, an exception handling system makes it possible to deal with exceptional conditions during program execution. When an exception occurs, normal execution of the program is interrupted and the exception is handled. The most common errors/exceptions are probably unauthorized access to a memory area (a pointer-handling error) and division by zero (the case of a zero divisor was not anticipated).
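For instance, the division-by-zero case mentioned above can be handled explicitly; a minimal Python sketch:

```python
def safe_divide(a, b):
    # When b == 0, normal execution is interrupted and control transfers
    # to the matching handler instead of crashing the program.
    try:
        return a / b
    except ZeroDivisionError:
        return None   # the exceptional case is handled explicitly

print(safe_divide(10, 2))  # 5.0
print(safe_divide(10, 0))  # None
```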
Explores techniques for speeding up data-flow analyses and discusses the importance of node ordering and post-order traversal.
Explores data-flow analysis for optimization, including solving data-flow equations, live variables, reaching definitions, and very busy expressions.
Formally verifying the correctness of software is necessary to merit the trust people put in software systems. Currently, formal verification requires human effort to prove that a piece of code matches its specification and code changes to improve verifiab ...
The performance monitoring of computer systems is a complex affair, made even more challenging by the increasing gap between hardware and software. Methods that collect and feed data to performance analysis can usually be classified into one of two groups. ...
Instruction-supply mechanisms, namely the branch predictors and instruction prefetchers, exploit recurring control flow in an application to predict the application's future control flow and provide the core with a useful instruction stream to execute in ...