Tornado code
In coding theory, Tornado codes are a class of erasure codes that support error correction. Tornado codes require a constant factor C more redundant blocks than the more data-efficient Reed–Solomon erasure codes, but are much faster to generate and can fix erasures faster. Software-based implementations of Tornado codes are about 100 times faster on small lengths and about 10,000 times faster on larger lengths than Reed–Solomon erasure codes. Since the introduction of Tornado codes, many other similar erasure codes have emerged, most notably Online codes, LT codes and Raptor codes.
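The sketch below illustrates the general erasure-coding idea that Tornado codes build on: redundant blocks formed as XORs of small subsets of data blocks, decoded by "peeling" any check that has exactly one missing neighbour. It is a toy, hand-picked graph in Python, not the layered sparse-graph construction of an actual Tornado code; the function names and the tiny example graph are invented for illustration.

    def xor_blocks(blocks):
        # Bitwise XOR of equal-length byte blocks.
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                out[i] ^= byte
        return bytes(out)

    def encode(data, graph):
        # One check block per subset of data-block indices in `graph`
        # (a toy stand-in for a Tornado code's sparse bipartite graph).
        return [xor_blocks([data[i] for i in subset]) for subset in graph]

    def peel_decode(data, checks, graph):
        # Recover erased blocks (None): a check with exactly one missing
        # neighbour reveals it as the XOR of the check and its known neighbours.
        progress = True
        while progress and any(block is None for block in data):
            progress = False
            for check, subset in zip(checks, graph):
                missing = [i for i in subset if data[i] is None]
                if len(missing) == 1:
                    known = [data[i] for i in subset if data[i] is not None]
                    data[missing[0]] = xor_blocks(known + [check])
                    progress = True
        return data

    data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
    graph = [(0, 1), (1, 2), (2, 3)]
    checks = encode(data, graph)
    received = [data[0], None, None, data[3]]       # blocks 1 and 2 erased
    print(peel_decode(received, checks, graph))     # recovers all four blocks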
Die shrink
The term die shrink (sometimes optical shrink or process shrink) refers to the scaling of metal–oxide–semiconductor (MOS) devices. The act of shrinking a die creates a largely identical circuit using a more advanced fabrication process, usually involving an advance of lithographic nodes. This reduces overall costs for a chip company, as the absence of major architectural changes to the processor lowers research and development costs while allowing more processor dies to be manufactured on the same silicon wafer, resulting in a lower cost per product sold.
Communication protocol
A communication protocol is a system of rules that allows two or more entities of a communications system to transmit information via any variation of a physical quantity. The protocol defines the rules, syntax, semantics, and synchronization of communication and possible error recovery methods. Protocols may be implemented by hardware, software, or a combination of both. Communicating systems use well-defined formats for exchanging various messages.
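As a concrete (and entirely hypothetical) illustration of syntax, semantics and error recovery, the Python sketch below frames messages as a 1-byte type, a 2-byte big-endian length and a payload. The format, the field sizes and the function names are assumptions chosen for the example, not any standard protocol.

    import struct

    HEADER = struct.Struct("!BH")   # syntax: 1-byte message type, 2-byte payload length

    def frame(msg_type: int, payload: bytes) -> bytes:
        return HEADER.pack(msg_type, len(payload)) + payload

    def parse(buffer: bytes):
        # The semantics of each message type would be agreed by both endpoints;
        # here we only check the syntax and signal an error the caller can
        # recover from (e.g. by waiting for more bytes or resynchronising).
        if len(buffer) < HEADER.size:
            raise ValueError("incomplete header")
        msg_type, length = HEADER.unpack_from(buffer)
        body = buffer[HEADER.size:HEADER.size + length]
        if len(body) < length:
            raise ValueError("incomplete payload")
        return msg_type, body

    print(parse(frame(1, b"hello")))    # (1, b'hello')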
Concatenated error correction code
In coding theory, concatenated codes form a class of error-correcting codes that are derived by combining an inner code and an outer code. They were conceived in 1966 by Dave Forney as a solution to the problem of finding a code that has both exponentially decreasing error probability with increasing block length and polynomial-time decoding complexity. Concatenated codes became widely used in space communications in the 1970s.
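The toy Python sketch below illustrates only the composition idea: an "outer" threefold repetition code wrapped around an "inner" even-parity check, decoded inner-first (failed parity treated as an erasure) and then outer (majority vote). Real concatenated schemes pair much stronger codes, classically a Reed–Solomon outer code with a convolutional inner code; the names and the choice of toy codes here are assumptions for illustration.

    def parity(byte):
        return bin(byte).count("1") % 2

    def encode(data):
        outer = [b for b in data for _ in range(3)]        # outer code: repeat each byte 3 times
        return [(b, parity(b)) for b in outer]             # inner code: append an even-parity bit

    def decode(codeword):
        # Inner decode: symbols failing the parity check become erasures (None).
        symbols = [b if parity(b) == p else None for b, p in codeword]
        # Outer decode: majority vote over each group of three repeats.
        out = bytearray()
        for i in range(0, len(symbols), 3):
            group = [s for s in symbols[i:i + 3] if s is not None]
            out.append(max(set(group), key=group.count))
        return bytes(out)

    codeword = encode(b"hi")
    codeword[0] = (codeword[0][0] ^ 0x01, codeword[0][1])   # corrupt one inner symbol
    print(decode(codeword))                                  # b'hi'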
Ethernet over twisted pair
Ethernet over twisted-pair technologies use twisted-pair cables for the physical layer of an Ethernet computer network. They are a subset of all Ethernet physical layers. Early Ethernet used various grades of coaxial cable, but in 1984, StarLAN showed the potential of simple unshielded twisted pair. This led to the development of 10BASE-T and its successors 100BASE-TX, 1000BASE-T and 10GBASE-T, supporting speeds of 10 and 100 megabits per second, then 1 and 10 gigabits per second, respectively.
Polycrystalline silicon
Polycrystalline silicon, or multicrystalline silicon, also called polysilicon, poly-Si, or mc-Si, is a high-purity, polycrystalline form of silicon used as a raw material by the solar photovoltaic and electronics industries. Polysilicon is produced from metallurgical-grade silicon by a chemical purification process called the Siemens process. This process involves distillation of volatile silicon compounds and their decomposition into silicon at high temperatures. An emerging, alternative process of refinement uses a fluidized bed reactor.
Small Form-factor Pluggable
Small Form-factor Pluggable (SFP) is a compact, hot-pluggable network interface module format used for both telecommunication and data communications applications. An SFP interface on networking hardware is a modular slot for a media-specific transceiver, such as for a fiber-optic cable or a copper cable. The advantage of using SFPs compared to fixed interfaces (e.g. modular connectors in Ethernet switches) is that individual ports can be equipped with different types of transceivers as required.
Data communication
Data communication, or digital communication, encompasses data transmission and data reception: the transfer of data in the form of a digital bitstream or a digitized analog signal over a point-to-point or point-to-multipoint communication channel. Examples of such channels are copper wires, optical fibers, wireless communication using radio spectrum, storage media and computer buses. The data are represented as an electromagnetic signal, such as an electrical voltage, radio wave, microwave, or infrared signal.
Belief propagation
Belief propagation, also known as sum–product message passing, is a message-passing algorithm for performing inference on graphical models, such as Bayesian networks and Markov random fields. It calculates the marginal distribution for each unobserved node (or variable), conditional on any observed nodes (or variables). Belief propagation is commonly used in artificial intelligence and information theory, and has demonstrated empirical success in numerous applications, including low-density parity-check codes, turbo codes, free energy approximation, and satisfiability.
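A minimal sketch of sum–product message passing on a chain-structured model is given below (Python with NumPy): forward and backward messages are combined with the unary potentials to give exact marginals, since a chain has no loops. The potential values are made-up numbers and the variable names are chosen for illustration only.

    import numpy as np

    # Four binary variables x0..x3 with unary potentials `phi` and a shared
    # pairwise potential `psi` that makes neighbouring variables prefer to agree.
    phi = np.array([[0.7, 0.3],
                    [0.5, 0.5],
                    [0.2, 0.8],
                    [0.6, 0.4]])
    psi = np.array([[0.9, 0.1],
                    [0.1, 0.9]])

    n = len(phi)
    fwd = [np.ones(2) for _ in range(n)]      # message arriving at x_i from the left
    bwd = [np.ones(2) for _ in range(n)]      # message arriving at x_i from the right

    for i in range(1, n):                     # left-to-right pass
        fwd[i] = psi.T @ (phi[i - 1] * fwd[i - 1])
    for i in range(n - 2, -1, -1):            # right-to-left pass
        bwd[i] = psi @ (phi[i + 1] * bwd[i + 1])

    for i in range(n):
        belief = phi[i] * fwd[i] * bwd[i]     # combine unary potential and incoming messages
        print(f"P(x{i}):", belief / belief.sum())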
Network congestion
Network congestion in data networking and queueing theory is the reduced quality of service that occurs when a network node or link is carrying more data than it can handle. Typical effects include queueing delay, packet loss or the blocking of new connections. A consequence of congestion is that an incremental increase in offered load leads to only a small increase in network throughput, or even to a reduction in it.
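The small Python simulation below is a deliberately simplified single link with a finite buffer, crude binomial arrivals and a fixed service rate, with all parameters invented for illustration. It shows the effects described above: once the offered load passes the link capacity, the queue (and hence queueing delay) and the packet loss grow, while throughput stops rising and flattens at the capacity.

    import random

    def simulate(offered_load, capacity=10, buffer_limit=50, ticks=20_000):
        # offered_load and capacity are in packets per tick
        queue = delivered = lost = queue_sum = 0
        for _ in range(ticks):
            arrivals = sum(random.random() < offered_load / 100 for _ in range(100))
            lost += max(0, queue + arrivals - buffer_limit)
            queue = min(queue + arrivals, buffer_limit)
            sent = min(queue, capacity)
            delivered += sent
            queue -= sent
            queue_sum += queue                 # queue length is a proxy for queueing delay
        return delivered / ticks, lost / ticks, queue_sum / ticks

    for load in (5, 9, 11, 15):
        throughput, loss, avg_queue = simulate(load)
        print(f"offered {load:>2}/tick -> throughput {throughput:5.2f}, "
              f"lost {loss:5.2f}/tick, mean queue {avg_queue:5.1f}")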