Interrupt priority level
The interrupt priority level (IPL) is a part of the current system interrupt state, which indicates the interrupt requests that will currently be accepted. The IPL may be indicated in hardware by the registers in a programmable interrupt controller, or in software by a bitmask, an integer value, or the source code of threads. An integer-based IPL may be as small as a single bit, with just two values: 0 (all interrupts enabled) or 1 (all interrupts disabled), as in the MOS Technology 6502.
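A minimal sketch of a software-maintained, integer-based IPL follows; the variable and function names are illustrative and not taken from any particular kernel.

    /* Sketch of a software interrupt priority level (IPL); names are
     * illustrative, not from a real kernel. */
    static volatile int current_ipl = 0;   /* 0 = all interrupts accepted */

    /* Raise the IPL to at least new_ipl and return the previous level. */
    int raise_ipl(int new_ipl)
    {
        int old = current_ipl;
        if (new_ipl > current_ipl)
            current_ipl = new_ipl;
        return old;
    }

    /* Restore a level previously returned by raise_ipl(). */
    void restore_ipl(int old_ipl)
    {
        current_ipl = old_ipl;
    }

    /* Called when an interrupt request of priority 'req' arrives:
     * only requests strictly above the current level are accepted. */
    int accept_interrupt(int req)
    {
        return req > current_ipl;
    }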
Memory management (operating systems)
In operating systems, memory management is the function responsible for managing the computer's primary memory. The memory management function keeps track of the status of each memory location, either allocated or free. It determines how memory is allocated among competing processes, deciding which processes get memory, when they receive it, and how much they are allowed. When memory is allocated, it determines which memory locations will be assigned. It tracks when memory is freed or unallocated and updates the status.
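As a sketch of the status-tracking part of this function, the fragment below records the allocated-or-free state of fixed-size memory frames in a simple table; the frame count and names are assumptions made for illustration, not any particular operating system's scheme.

    /* Sketch: tracking which fixed-size memory frames are allocated or
     * free.  The frame count is an illustrative value. */
    #include <stdint.h>

    #define NUM_FRAMES 1024
    static uint8_t frame_used[NUM_FRAMES];   /* 0 = free, 1 = allocated */

    /* Allocate one free frame; returns its index, or -1 if none is free. */
    int alloc_frame(void)
    {
        for (int i = 0; i < NUM_FRAMES; i++) {
            if (!frame_used[i]) {
                frame_used[i] = 1;           /* mark the location allocated */
                return i;
            }
        }
        return -1;                           /* memory exhausted */
    }

    /* Mark a frame free again, updating its status. */
    void free_frame(int i)
    {
        if (i >= 0 && i < NUM_FRAMES)
            frame_used[i] = 0;
    }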
Global Descriptor Table
The Global Descriptor Table (GDT) is a data structure used by Intel x86-family processors starting with the 80286 to define the characteristics of the various memory areas used during program execution, including the base address, the size, and access privileges like executability and writability. These memory areas are called segments in Intel terminology. The GDT is a table of 8-byte entries. Each entry is a descriptor that may describe a memory segment, a Task State Segment (TSS), a Local Descriptor Table (LDT), or a call gate.
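The 8-byte entry format is often declared in C roughly as follows for 32-bit x86; the field names are a common convention rather than Intel's official ones, and the packed attribute shown is GCC-style.

    /* One 8-byte GDT entry (segment descriptor), 32-bit x86 layout. */
    #include <stdint.h>

    struct gdt_entry {
        uint16_t limit_low;    /* segment limit, bits 0..15  */
        uint16_t base_low;     /* base address, bits 0..15   */
        uint8_t  base_middle;  /* base address, bits 16..23  */
        uint8_t  access;       /* present, privilege level, type (code/data, writable, executable) */
        uint8_t  granularity;  /* limit bits 16..19 plus granularity/size flags */
        uint8_t  base_high;    /* base address, bits 24..31  */
    } __attribute__((packed)); /* must occupy exactly 8 bytes */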
Polling (computer science)
Polling, or interrogation, refers to actively sampling the status of an external device by a client program as a synchronous activity. Polling is most often used in terms of input/output (I/O), and is also referred to as polled I/O or software-driven I/O. A good example of a hardware implementation is a watchdog timer. Polling is the process by which the computer or controlling device repeatedly checks an external device to determine its readiness or state, often at a low hardware level.
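A minimal polled-I/O sketch under assumed memory-mapped register addresses and bit masks (none of which correspond to a real device) might look like this:

    /* Sketch of polled (software-driven) I/O: busy-wait until the
     * device's status register reports it is ready, then transfer one
     * byte.  Addresses and masks are hypothetical. */
    #include <stdint.h>

    #define STATUS_REG ((volatile uint8_t *)0x40001000)  /* assumed address */
    #define DATA_REG   ((volatile uint8_t *)0x40001004)  /* assumed address */
    #define READY_BIT  0x01

    void poll_write(uint8_t byte)
    {
        while ((*STATUS_REG & READY_BIT) == 0)
            ;                    /* spin: keep sampling the device status */
        *DATA_REG = byte;        /* device is ready, perform the transfer */
    }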
Overlay (programming)
In a general computing sense, overlaying means "the process of transferring a block of program code or other data into main memory, replacing what is already stored". Overlaying is a programming method that allows programs to be larger than the computer's main memory. An embedded system would normally use overlays because physical memory (the internal memory of a system-on-chip) is limited and virtual memory facilities are lacking.
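The sketch below illustrates the idea under several assumptions: overlays share one fixed region of main memory, their code is linked to run at that region's address, and a hypothetical helper read_overlay_image() copies an overlay image in from secondary storage.

    /* Sketch of a trivial overlay scheme; region size, cursor, and the
     * loader helper are illustrative assumptions. */
    #include <stddef.h>

    #define OVERLAY_REGION_SIZE 4096
    static unsigned char overlay_region[OVERLAY_REGION_SIZE]; /* shared area in main memory */
    static int loaded_overlay = -1;                            /* which overlay is resident */

    /* Hypothetical loader: reads overlay 'id' from storage into 'dst'. */
    extern void read_overlay_image(int id, void *dst, size_t max);

    /* Run an overlay's entry point, loading its code first if another
     * overlay currently occupies the shared region. */
    void call_overlay(int id, size_t entry_offset)
    {
        if (loaded_overlay != id) {
            read_overlay_image(id, overlay_region, OVERLAY_REGION_SIZE);
            loaded_overlay = id;
        }
        /* Relies on the overlay being linked to execute at this address;
         * converting data to a function pointer is implementation-defined. */
        void (*entry)(void) = (void (*)(void))(overlay_region + entry_offset);
        entry();
    }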
Fair-share scheduling
Fair-share scheduling is a scheduling algorithm for computer operating systems in which the CPU usage is equally distributed among system users or groups, as opposed to equal distribution of resources among processes. One common method of logically implementing the fair-share scheduling strategy is to recursively apply the round-robin scheduling strategy at each level of abstraction (processes, users, groups, etc.). The time quantum required by round-robin is arbitrary, as any equal division of time will produce the same results.
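A sketch of this recursive round-robin idea with two levels (groups, then the processes inside each group) follows; the data layout is an assumption, and it presumes every group holds at least one runnable process.

    /* Sketch: fair-share selection by applying round-robin at two
     * levels.  Sizes and field names are illustrative. */
    #define MAX_GROUPS 4
    #define MAX_PROCS  8

    struct group {
        int nprocs;
        int procs[MAX_PROCS];   /* process IDs belonging to this group */
        int next;               /* round-robin cursor within the group */
    };

    static struct group groups[MAX_GROUPS];
    static int ngroups;         /* assumed > 0 */
    static int next_group;      /* round-robin cursor over groups */

    /* Each call returns the process ID that receives the next quantum. */
    int pick_next_process(void)
    {
        struct group *g = &groups[next_group];
        next_group = (next_group + 1) % ngroups;     /* equal share per group */
        int pid = g->procs[g->next];
        g->next = (g->next + 1) % g->nprocs;         /* equal share inside the group */
        return pid;
    }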
CPU time
CPU time (or process time) is the amount of time for which a central processing unit (CPU) was used for processing instructions of a computer program or operating system, as opposed to elapsed time, which includes, for example, waiting for input/output (I/O) operations or entering low-power (idle) mode. The CPU time is measured in clock ticks or seconds. Often, it is useful to measure CPU time as a percentage of the CPU's capacity, which is called the CPU usage. CPU time and CPU usage have two main uses.
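As a small illustration, the standard C clock() function reports processor time rather than elapsed wall-clock time; the loop below is an arbitrary CPU-bound workload used only to produce something measurable.

    /* Sketch: measuring CPU time consumed by a piece of code with the
     * standard C clock() function. */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        clock_t start = clock();

        volatile double x = 0.0;
        for (long i = 0; i < 100000000L; i++)   /* CPU-bound work */
            x += i * 0.5;

        clock_t end = clock();
        double cpu_seconds = (double)(end - start) / CLOCKS_PER_SEC;
        printf("CPU time used: %.3f s\n", cpu_seconds);
        return 0;
    }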
Priority inheritance
In real-time computing, priority inheritance is a method for eliminating unbounded priority inversion. Using this programming method, a process scheduling algorithm increases the priority of a process (A) to the maximum priority of any other process waiting for any resource on which A has a resource lock (if it is higher than the original priority of A). The basic idea of the priority inheritance protocol is that when a job blocks one or more high-priority jobs, it ignores its original priority assignment and executes its critical section at an elevated priority level.
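On POSIX systems, priority inheritance can be requested per mutex through the PTHREAD_PRIO_INHERIT protocol (where the implementation supports that option); the sketch below shows only the mutex setup, not the competing threads.

    /* Sketch: requesting the priority inheritance protocol on a POSIX
     * mutex, so a lower-priority thread holding the lock is boosted to
     * the priority of the highest-priority thread blocked on it. */
    #include <pthread.h>

    pthread_mutex_t lock;

    int init_pi_mutex(void)
    {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        /* Enable priority inheritance for this mutex. */
        pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
        int rc = pthread_mutex_init(&lock, &attr);
        pthread_mutexattr_destroy(&attr);
        return rc;
    }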
Process control block
A process control block (PCB), also sometimes called a process descriptor, is a data structure used by computer operating systems to store all the information about a process. When a process is created (initialized or installed), the operating system creates a corresponding process control block, which specifies and tracks the process state (i.e. new, ready, running, waiting or terminated). Since it is used to track process information, the PCB plays a key role in context switching.
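A sketch of the kind of fields a PCB might contain appears below; the exact contents and layout differ between operating systems, and every name here is illustrative.

    /* Sketch of a process control block's typical contents. */
    #include <stdint.h>

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        int              pid;             /* process identifier */
        enum proc_state  state;           /* new, ready, running, waiting, terminated */
        uint64_t         program_counter; /* saved instruction pointer for context switching */
        uint64_t         registers[16];   /* saved general-purpose register contents */
        void            *page_table;      /* memory-management information */
        int              priority;        /* scheduling information */
        struct pcb      *next;            /* link in the scheduler's ready queue */
    };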
Memory-mapped file
A memory-mapped file is a segment of virtual memory that has been assigned a direct byte-for-byte correlation with some portion of a file or file-like resource. This resource is typically a file that is physically present on disk, but can also be a device, shared memory object, or other resource that the operating system can reference through a file descriptor. Once present, this correlation between the file and the memory space permits applications to treat the mapped portion as if it were primary memory.
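On POSIX systems this correlation is typically established with mmap(); the sketch below maps a file read-only and accesses its first byte through the mapping. The file name is illustrative.

    /* Sketch: mapping a file into memory with mmap() and reading its
     * first byte as if it were ordinary memory. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("example.dat", O_RDONLY);   /* file name is illustrative */
        if (fd < 0) return 1;

        struct stat st;
        if (fstat(fd, &st) < 0 || st.st_size == 0) { close(fd); return 1; }

        /* Establish the byte-for-byte correlation between file and memory. */
        char *data = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (data == MAP_FAILED) { close(fd); return 1; }

        printf("first byte: %c\n", data[0]);      /* access the file as memory */

        munmap(data, (size_t)st.st_size);
        close(fd);
        return 0;
    }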