Memory access pattern
In computing, a memory access pattern or IO access pattern is the pattern with which a system or program reads and writes memory on secondary storage. These patterns differ in the level of locality of reference and drastically affect cache performance, and also have implications for the approach to parallelism and distribution of workload in shared memory systems. Further, cache coherency issues can affect multiprocessor performance, which means that certain memory access patterns place a ceiling on parallelism (which manycore approaches seek to break).
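The effect of locality can be seen in something as simple as the loop order used to walk a two-dimensional array. The C sketch below (array size and function names are illustrative, not from the source) contrasts a sequential, row-major traversal with a strided, column-major traversal of the same row-major data; both compute the same sum, but the strided version typically runs much slower because it uses only a fraction of each cache line it fetches.

```c
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

#define N 4096   /* 4096 x 4096 matrix of ints, stored row-major */

/* Sequential (row-major) traversal: consecutive addresses, good spatial
 * locality; each fetched cache line is fully used before it is evicted. */
static long long sum_row_major(const int *a)
{
    long long s = 0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            s += a[i * N + j];
    return s;
}

/* Strided (column-major) traversal of the same row-major array: each access
 * jumps N * sizeof(int) bytes, so nearly every access touches a different
 * cache line and locality is poor. */
static long long sum_col_major(const int *a)
{
    long long s = 0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            s += a[i * N + j];
    return s;
}

int main(void)
{
    int *a = malloc((size_t)N * N * sizeof *a);
    if (!a)
        return 1;
    for (size_t i = 0; i < (size_t)N * N; i++)
        a[i] = (int)(i & 0xff);
    printf("row-major sum: %lld\n", sum_row_major(a));
    printf("col-major sum: %lld\n", sum_col_major(a));
    free(a);
    return 0;
}
```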
Direct memory access
Direct memory access (DMA) is a feature of computer systems that allows certain hardware subsystems to access main system memory independently of the central processing unit (CPU). Without DMA, when the CPU is using programmed input/output, it is typically fully occupied for the entire duration of the read or write operation, and is thus unavailable to perform other work. With DMA, the CPU first initiates the transfer, then it does other operations while the transfer is in progress, and it finally receives an interrupt from the DMA controller (DMAC) when the operation is done.
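As a rough illustration of that initiate/continue/interrupt sequence, the bare-metal-style C sketch below programs a hypothetical memory-mapped DMA controller. The register addresses, names, and bit layout are invented for illustration and do not describe any real device; the sketch compiles but is meant to be read rather than run on a hosted OS.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical memory-mapped DMAC registers (illustrative addresses only). */
#define DMAC_BASE  0x40001000u
#define DMAC_SRC   (*(volatile uint32_t *)(DMAC_BASE + 0x00))
#define DMAC_DST   (*(volatile uint32_t *)(DMAC_BASE + 0x04))
#define DMAC_LEN   (*(volatile uint32_t *)(DMAC_BASE + 0x08))
#define DMAC_CTRL  (*(volatile uint32_t *)(DMAC_BASE + 0x0C))
#define DMAC_CTRL_START (1u << 0)

static volatile bool dma_done;

/* Interrupt handler invoked by the DMAC when the transfer completes. */
void dmac_irq_handler(void)
{
    dma_done = true;
}

/* The CPU only programs the transfer and starts it; the DMAC moves the data
 * while the CPU is free to do other work. */
void dma_copy_start(uint32_t src, uint32_t dst, uint32_t len)
{
    dma_done = false;
    DMAC_SRC  = src;
    DMAC_DST  = dst;
    DMAC_LEN  = len;
    DMAC_CTRL = DMAC_CTRL_START;
}

void example(void)
{
    dma_copy_start(0x20000000u, 0x20010000u, 4096);
    while (!dma_done) {
        /* CPU does unrelated work here instead of busy-copying the data;
         * dma_done is set from the completion interrupt. */
    }
}
```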
Energy poverty
Energy poverty is lack of access to modern energy services. It refers to the situation of large numbers of people in developing countries and some people in developed countries whose well-being is negatively affected by very low consumption of energy, use of dirty or polluting fuels, and excessive time spent collecting fuel to meet basic needs. Today, 759 million people lack access to consistent electricity and 2.6 billion people use dangerous and inefficient cooking systems.
Cache-oblivious algorithm
In computing, a cache-oblivious algorithm (or cache-transcendent algorithm) is an algorithm designed to take advantage of a processor cache without having the size of the cache (or the length of the cache lines, etc.) as an explicit parameter. An optimal cache-oblivious algorithm is a cache-oblivious algorithm that uses the cache optimally (in an asymptotic sense, ignoring constant factors). Thus, a cache-oblivious algorithm is designed to perform well, without modification, on multiple machines with different cache sizes, or for a memory hierarchy with different levels of cache having different sizes.
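A standard illustration is matrix transposition by recursive divide-and-conquer. The C sketch below (a common textbook formulation, not taken from the source) never mentions a cache size or line length: the recursion halves the larger dimension until the sub-blocks are trivial, so at some recursion depth the blocks fit whatever cache level happens to be present.

```c
#include <stdio.h>

#define N 8   /* small size so the demo prints; the algorithm itself takes no
                 cache parameters and works for any N */

/* Cache-oblivious transpose of the sub-block rows [r0, r1) x cols [c0, c1)
 * of a into b. Splitting the larger dimension keeps sub-blocks roughly
 * square, so every level of the memory hierarchy eventually sees blocks
 * that fit it, without the code knowing any cache size. */
static void transpose(const double a[N][N], double b[N][N],
                      int r0, int r1, int c0, int c1)
{
    if (r1 - r0 <= 1 && c1 - c0 <= 1) {
        if (r1 > r0 && c1 > c0)
            b[c0][r0] = a[r0][c0];
    } else if (r1 - r0 >= c1 - c0) {
        int rm = (r0 + r1) / 2;
        transpose(a, b, r0, rm, c0, c1);
        transpose(a, b, rm, r1, c0, c1);
    } else {
        int cm = (c0 + c1) / 2;
        transpose(a, b, r0, r1, c0, cm);
        transpose(a, b, r0, r1, cm, c1);
    }
}

int main(void)
{
    double a[N][N], b[N][N];
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = i * N + j;
    transpose(a, b, 0, N, 0, N);
    printf("b[3][5] = %g (expect %g)\n", b[3][5], a[5][3]);
    return 0;
}
```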
Nehalem (microarchitecture)
Nehalem (/nəˈheɪləm/) is the codename for Intel's 45 nm microarchitecture released in November 2008. It was used in the first-generation Intel Core i5 and i7 processors, and succeeds the older Core microarchitecture used in Core 2 processors. The term "Nehalem" comes from the Nehalem River. Nehalem is built on the 45 nm process, is able to run at higher clock speeds, and is more energy-efficient than Penryn microprocessors. Hyper-threading is reintroduced, along with a reduction in L2 cache size and an enlarged L3 cache that is shared among all cores.
Celeron
Celeron is a series of low-end IA-32 and x86-64 computer microprocessor models targeted at low-cost personal computers, manufactured by Intel. The first Celeron-branded CPU was introduced on April 15, 1998, and was based on the Pentium II. Celeron-branded processors released from 2009 to 2022 are compatible with IA-32 software. They typically offer less performance per clock compared to flagship Intel CPU lines, such as the Pentium or Core brands. They often have less cache or intentionally disabled advanced features, with variable impact on performance.
Opteron
Opteron is AMD's former line of x86 server and workstation processors, and was the first processor line to support the AMD64 instruction set architecture (known generically as x86-64). It was released on April 22, 2003, with the SledgeHammer core (K8) and was intended to compete in the server and workstation markets, particularly in the same segment as the Intel Xeon processor. Processors based on the AMD K10 microarchitecture (codenamed Barcelona) were announced on September 10, 2007, featuring a new quad-core configuration.
Uniform memory access
Uniform memory access (UMA) is a shared memory architecture used in parallel computers. All the processors in the UMA model share the physical memory uniformly. In a UMA architecture, access time to a memory location is independent of which processor makes the request or which memory chip contains the transferred data. Uniform memory access architectures are often contrasted with non-uniform memory access (NUMA) architectures. In the NUMA architecture, each processor may use a private cache.
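The "access time is independent of which processor asks" property can be probed, very roughly, by having several threads time a walk over the same shared buffer. The pthreads sketch below is an assumption-laden toy, not a rigorous benchmark: it does not pin threads or memory pages, so it only hints at the distinction. On a UMA machine the per-thread times come out roughly equal; on a NUMA machine, threads scheduled far from the memory's home node would typically report higher times.

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define BUF_WORDS (1u << 24)   /* ~128 MiB of longs shared by all threads */
#define NTHREADS  4

static long *shared_buf;

/* Each thread walks the same shared buffer and reports how long it took. */
static void *walker(void *arg)
{
    long id = (long)arg;
    struct timespec t0, t1;
    volatile long sum = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (unsigned i = 0; i < BUF_WORDS; i += 8)   /* ~one access per cache line */
        sum += shared_buf[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double ms = (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("thread %ld: %.2f ms (sum %ld)\n", id, ms, (long)sum);
    return NULL;
}

int main(void)
{
    shared_buf = calloc(BUF_WORDS, sizeof *shared_buf);
    if (!shared_buf)
        return 1;
    pthread_t t[NTHREADS];
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, walker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    free(shared_buf);
    return 0;
}
```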
Prediction
A prediction (Latin præ-, "before," and dicere, "to say"), or forecast, is a statement about a future event or about future data. Predictions are often, but not always, based upon experience or knowledge. There is no universal agreement about the exact difference between prediction and "estimation"; different authors and disciplines ascribe different connotations. Future events are necessarily uncertain, so guaranteed accurate information about the future is impossible. Prediction can be useful to assist in making plans about possible developments.
Transmeta
Transmeta Corporation was an American fabless semiconductor company based in Santa Clara, California. It developed low power x86 compatible microprocessors based on a VLIW core and a software layer called Code Morphing Software. Code Morphing Software (CMS) consisted of an interpreter, a runtime system and a dynamic binary translator. x86 instructions were first interpreted one instruction at a time and profiled, then depending upon the frequency of execution of a code block, CMS would progressively generate more optimized translations.
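That interpret-first, translate-when-hot progression can be sketched in miniature. The C toy below is entirely hypothetical (it is not Transmeta's CMS or derived from it): a "block" is interpreted while its executions are counted, and once the count crosses a threshold a pre-built native routine is installed and used for all later executions of that block.

```c
#include <stdio.h>

#define HOT_THRESHOLD 3   /* arbitrary illustrative hotness cutoff */

typedef struct {
    const char *name;
    long exec_count;            /* profile data gathered while interpreting */
    long (*translation)(long);  /* non-NULL once the block has been translated */
} block_t;

/* Slow path: "interpret" the block one step at a time (a stand-in here). */
static long interpret_block(block_t *b, long x)
{
    b->exec_count++;
    return x + 1;
}

/* "Translator": a real system would emit optimized host code; this toy just
 * installs an equivalent native function. */
static long translated_add_one(long x) { return x + 1; }

static long run_block(block_t *b, long x)
{
    if (b->translation)                  /* fast path: run translated code */
        return b->translation(x);
    long r = interpret_block(b, x);      /* slow path: interpret and profile */
    if (b->exec_count >= HOT_THRESHOLD)  /* hot enough: translate for next time */
        b->translation = translated_add_one;
    return r;
}

int main(void)
{
    block_t b = { "add_one", 0, NULL };
    long x = 0;
    for (int i = 0; i < 10; i++) {
        x = run_block(&b, x);
        printf("iter %d: x=%ld, %s\n", i, x,
               b.translation ? "translated" : "interpreted");
    }
    return 0;
}
```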