In computer operating systems, memory paging (or swapping on some Unix-like systems) is a memory management scheme by which a computer stores and retrieves data from secondary storage for use in main memory. In this scheme, the operating system retrieves data from secondary storage in same-size blocks called pages. Paging is an important part of virtual memory implementations in modern operating systems, using secondary storage to let programs exceed the size of available physical memory.
For simplicity, main memory is called "RAM" (an acronym of random-access memory) and secondary storage is called "disk" (a shorthand for hard disk drive, drum memory or solid-state drive, etc.), but as with many aspects of computing, the concepts are independent of the technology used.
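To make the fixed-size pages concrete, the following minimal C sketch splits a virtual address into a virtual page number and a page offset. It assumes 4 KiB pages and 64-bit addresses purely for illustration; the constants and the example address are not tied to any particular operating system or architecture.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative assumptions only: 4 KiB pages and 64-bit virtual addresses.
 * Real page sizes and address widths depend on the CPU and OS configuration. */
#define PAGE_SIZE        4096u
#define PAGE_SHIFT       12u                 /* log2(PAGE_SIZE) */
#define PAGE_OFFSET_MASK (PAGE_SIZE - 1u)

int main(void)
{
    uint64_t vaddr = 0x00007f3a12345abcULL;  /* arbitrary example address */

    /* A virtual address is split into a virtual page number and an offset
     * within that page; only the page number is translated by the paging
     * hardware, while the offset is carried over unchanged into the
     * physical frame. */
    uint64_t vpn    = vaddr >> PAGE_SHIFT;
    uint64_t offset = vaddr & PAGE_OFFSET_MASK;

    printf("virtual page number = 0x%llx, offset = 0x%llx\n",
           (unsigned long long)vpn, (unsigned long long)offset);
    return 0;
}
```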
Depending on the memory model, paged memory functionality is usually built into a CPU/MCU through a Memory Management Unit (MMU) or Memory Protection Unit (MPU) and separately enabled by privileged system code in the operating system's kernel. In CPUs implementing the x86 instruction set architecture (ISA), for instance, memory paging is enabled via the CR0 control register.
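As a rough sketch of that last step, the fragment below shows the order of operations a kernel typically follows on x86: load CR3 with the physical base address of the top-level page table, then set the PG bit (bit 31) of CR0. The read_cr0, write_cr0 and write_cr3 helpers are hypothetical placeholders for the privileged inline-assembly accessors a real kernel would provide; this is not runnable outside kernel context.

```c
#include <stdint.h>

/* Hypothetical helpers: a real kernel implements these with privileged
 * inline assembly (moves to/from %cr0 and %cr3). Assumed here only so the
 * sequencing can be shown in C. */
uint64_t read_cr0(void);
void     write_cr0(uint64_t value);
void     write_cr3(uint64_t page_table_base);

#define CR0_PG (1ULL << 31)   /* paging-enable bit of CR0 */

/* Sketch of enabling paging: point CR3 at the top-level page table first,
 * then set CR0.PG. From that point on, every memory access is translated
 * through the page tables. */
void enable_paging(uint64_t page_table_phys_addr)
{
    write_cr3(page_table_phys_addr);   /* physical base of the page tables */
    write_cr0(read_cr0() | CR0_PG);    /* turn paging on */
}
```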
In the 1960s, swapping was an early virtual memory technique. An entire program or entire segment would be "swapped out" (or "rolled out") from RAM to disk or drum, and another one would be swapped in (or rolled in). A swapped-out program would remain current, but its execution would be suspended while its RAM was in use by another program; a program with a swapped-out segment could continue running until it needed that segment, at which point it would be suspended until the segment was swapped in.
A program might include multiple overlays that occupy the same memory at different times. Overlays are not a method of paging RAM to disk but merely of minimizing the program's RAM use. Subsequent architectures used memory segmentation, and individual program segments became the units exchanged between disk and RAM. A segment was the program's entire code segment or data segment, or sometimes other large data structures.
A memory management unit (MMU), sometimes called a paged memory management unit (PMMU), is a component that controls the accesses a processor makes to the memory of the computer in which it is installed. In the era of the first microprocessors it was a dedicated integrated circuit; the MMU was later integrated into the microprocessor itself, starting with the 80286 in the Intel x86 family and the 68030 in the Motorola 680x0 family.
An operating system kernel, or simply kernel, is one of the fundamental parts of certain operating systems. It manages the computer's resources and allows the various components, hardware and software, to communicate with one another. As part of the operating system, the kernel provides mechanisms for abstracting the hardware, notably memory, the processor(s), and the exchange of information between software and hardware devices.
In computing, the virtual memory mechanism was developed in the 1960s. It relies on on-the-fly translation of the (virtual) addresses seen by software into physical addresses in main memory. Virtual memory makes it possible to use mass storage as an extension of main memory, to increase the degree of multiprogramming, to implement memory protection mechanisms, and to share memory between processes.
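The translation and demand-paging behaviour described above can be sketched as follows. This is a deliberately simplified single-level illustration assuming 4 KiB pages, a small fixed-size table, and a hypothetical handle_page_fault routine; real hardware uses multi-level page tables walked by the MMU, with the operating system handling only the fault path.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative single-level page table; real MMUs walk multi-level tables
 * in hardware rather than in C code. */
#define PAGE_SHIFT 12u
#define NUM_PAGES  1024u

typedef struct {
    bool     present;   /* is the page currently resident in RAM? */
    uint64_t frame;     /* physical frame number when present */
} pte_t;

static pte_t page_table[NUM_PAGES];

/* Hypothetical OS hook, assumed for illustration: bring the page in from
 * secondary storage, choose (and possibly reclaim) a frame, and return its
 * frame number. */
extern uint64_t handle_page_fault(uint64_t vpn);

/* Translate a virtual address to a physical address, faulting the page in
 * from disk if it is not resident (demand paging). The modulo keeps the
 * index inside this toy table. */
uint64_t translate(uint64_t vaddr)
{
    uint64_t vpn    = (vaddr >> PAGE_SHIFT) % NUM_PAGES;
    uint64_t offset = vaddr & ((1u << PAGE_SHIFT) - 1u);

    if (!page_table[vpn].present) {
        /* Page fault: the data lives on disk; the OS loads it into a free
         * (or reclaimed) frame and marks the entry present. */
        page_table[vpn].frame   = handle_page_fault(vpn);
        page_table[vpn].present = true;
    }

    return (page_table[vpn].frame << PAGE_SHIFT) | offset;
}
```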