In computing, a context switch is the process of storing the state of a process or thread so that it can be restored and resume execution later, and then restoring a different, previously saved state. This allows multiple processes to share a single central processing unit (CPU) and is an essential feature of a multiprogramming or multitasking operating system. On a traditional CPU, each process (a program in execution) uses the CPU registers to hold its data and its current execution state. In a multitasking operating system, however, the operating system switches between processes or threads to give the appearance of running several of them simultaneously. At every switch, the operating system must save the state of the currently running process and then load the state of the next process to run on the CPU; this sequence of saving one state and loading another is what constitutes a context switch.
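As a concrete illustration, below is a minimal sketch of the save-and-restore idea in user space, using the POSIX <ucontext.h> functions getcontext, makecontext, and swapcontext. The names main_ctx, task_ctx, and task are illustrative; a real kernel-level context switch saves considerably more state (kernel stack, memory mappings, and so on).

/* Minimal sketch of saving and restoring execution state in user space.
 * The kernel's own context switch saves more, but the idea is the same. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[64 * 1024];       /* stack for the second context */

static void task(void) {
    puts("task: running with its own saved register state");
    /* Returning resumes main_ctx because of uc_link below. */
}

int main(void) {
    getcontext(&task_ctx);                 /* capture a template context */
    task_ctx.uc_stack.ss_sp   = task_stack;
    task_ctx.uc_stack.ss_size = sizeof task_stack;
    task_ctx.uc_link          = &main_ctx; /* where to resume afterwards */
    makecontext(&task_ctx, task, 0);

    puts("main: saving my state and switching to the task");
    swapcontext(&main_ctx, &task_ctx);     /* save main, load task */
    puts("main: restored after the task finished");
    return 0;
}

Here swapcontext saves the caller's registers into main_ctx and loads those of task_ctx; when task returns, the uc_link field resumes main_ctx, mirroring the save-then-load sequence described above.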
The precise meaning of the phrase "context switch" varies. In a multitasking context, it refers to the process of storing the system state for one task, so that task can be paused and another task resumed. A context switch can also occur as the result of an interrupt, such as when a task needs to access disk storage, freeing up CPU time for other tasks. Some operating systems also require a context switch to move between user mode and kernel mode tasks. The process of context switching can have a negative impact on system performance.
Context switches are usually computationally intensive, and much of operating system design aims to minimize their cost. Switching from one process to another requires a certain amount of administrative work: saving and loading registers and memory maps, updating various tables and lists, and so on. What a context switch actually involves depends on the architecture, the operating system, and how many resources are shared (threads belonging to the same process share far more resources than unrelated, non-cooperating processes do); a rough way to gauge the cost is sketched below.
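One rough way to gauge this cost, assuming a POSIX system, is the classic pipe ping-pong: two processes bounce a byte back and forth, so every round trip forces the scheduler to switch between them. The sketch below is illustrative rather than a precise benchmark; the measured time also includes pipe and system-call overhead, so it should be read as an upper bound.

/* Rough sketch of measuring context-switch overhead with a pipe ping-pong. */
#include <stdio.h>
#include <sys/types.h>
#include <time.h>
#include <unistd.h>

#define ROUNDS 100000

int main(void) {
    int p2c[2], c2p[2];              /* parent->child and child->parent pipes */
    char b = 0;
    if (pipe(p2c) < 0 || pipe(c2p) < 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                  /* child: echo every byte back */
        for (int i = 0; i < ROUNDS; i++) {
            read(p2c[0], &b, 1);
            write(c2p[1], &b, 1);
        }
        _exit(0);
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ROUNDS; i++) {   /* parent: ping, then wait for pong */
        write(p2c[1], &b, 1);
        read(c2p[0], &b, 1);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    /* Each round trip involves at least two context switches. */
    printf("~%.0f ns per switch (upper bound)\n", ns / (ROUNDS * 2.0));
    return 0;
}

Pinning both processes to the same CPU core (for example with taskset on Linux) gives cleaner numbers, since it guarantees the two processes actually displace each other rather than running on separate cores.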
To program embedded systems efficiently, an understanding of their architecture is required. After following this course, students will be able to take an existing SoC, understand its architecture, and ...
The purpose of this course is to discuss the design of database and operating system concepts using a hands-on approach.
Multiprocessors are a core component in all types of computing infrastructure, from phones to datacenters. This course will build on the prerequisites of processor design and concurrency to introduce
Explores process execution, protection, and efficient operating mechanisms for secure and reliable multi-programmed environments.
Introduces fundamental scheduling concepts in operating systems, covering limited direct execution, protection rings, context switching, and various scheduling policies.
In computing, an operating system (often called the OS, from the English "operating system", or sometimes SE in French) is a set of programs that directs the use of a computer's resources by application software. It receives requests from application software to use the computer's resources, and it manages those requests and the resources they need, preventing interference between programs.
[Figure: a process with two threads.] A thread, or fil in French (translation standardized by ISO/IEC 2382-7:2000; also known as a lightweight process, thread of execution, or task, among other names), is similar to a process in that both represent the execution of a set of machine-language instructions on a processor. From the user's point of view, these executions appear to proceed in parallel.
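As a small illustration of the figure's caption, the following sketch (using POSIX threads; the worker function and counter variable are illustrative) starts two threads inside one process. They run concurrently yet share the process's global memory, which is why the shared counter must be protected by a mutex.

/* One process, two threads, sharing the same global memory. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                 /* shared by both threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    const char *name = arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);       /* serialize access to shared data */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    printf("%s done\n", name);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "thread 1");
    pthread_create(&t2, NULL, worker, "thread 2");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* 2000000: both saw the same memory */
    return 0;
}

Compiled with cc -pthread, the program prints counter = 2000000, showing that both threads updated the same variable; two separate processes would each have worked on their own copy.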
An operating system kernel, or simply kernel, is one of the fundamental parts of certain operating systems. It manages the computer's resources and allows the different components, hardware and software, to communicate with one another. As part of the operating system, the kernel provides hardware abstraction mechanisms, notably for memory, for the processor(s), and for the exchange of information between software and hardware peripherals.
Micro-architectural behavior of traditional disk-based online transaction processing (OLTP) systems has been investigated extensively over the past couple of decades. Results show that traditional OLTP systems mostly under-utilize the available micro-archi ...
The miniaturization of integrated circuits (ICs) and their higher performance and energy efficiency, combined with new machine learning algorithms and applications, have paved the way to intelligent, interconnected edge devices. In the medical domain, they ...
EPFL, 2023.
A central task in high-level synthesis is scheduling: the allocation of operations to clock cycles. The classic approach to scheduling is static, in which each operation is mapped to a clock cycle at compile-time, but recent years have seen the emergence o ...