Noise figure
Noise figure (NF) and noise factor (F) are figures of merit that indicate degradation of the signal-to-noise ratio (SNR) that is caused by components in a signal chain. These figures of merit are used to evaluate the performance of an amplifier or a radio receiver, with lower values indicating better performance. The noise factor is defined as the ratio of the output noise power of a device to the portion thereof attributable to thermal noise in the input termination at standard noise temperature T0 (usually 290 K).
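Equivalently, the noise factor can be computed as the ratio of input SNR to output SNR (with the source termination at T0), and the noise figure is that ratio expressed in decibels, NF = 10 log10(F). A minimal Python sketch of these relations; the function names and SNR values are illustrative:

```python
import math

def noise_factor(snr_in: float, snr_out: float) -> float:
    """Noise factor F: input SNR divided by output SNR (linear ratios, not dB)."""
    return snr_in / snr_out

def noise_figure_db(f: float) -> float:
    """Noise figure NF in dB from the linear noise factor F."""
    return 10 * math.log10(f)

# Hypothetical amplifier that degrades a 40 dB input SNR to a 37 dB output SNR.
snr_in = 10 ** (40 / 10)    # dB -> linear power ratio
snr_out = 10 ** (37 / 10)
f = noise_factor(snr_in, snr_out)
print(f"F = {f:.2f}, NF = {noise_figure_db(f):.1f} dB")   # F = 2.00, NF = 3.0 dB
```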
Huffman coding
In computer science and information theory, a Huffman code is a particular type of optimal prefix code that is commonly used for lossless data compression. The process of finding or using such a code is Huffman coding, an algorithm developed by David A. Huffman while he was a Sc.D. student at MIT, and published in the 1952 paper "A Method for the Construction of Minimum-Redundancy Codes". The output from Huffman's algorithm can be viewed as a variable-length code table for encoding a source symbol (such as a character in a file).
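To make the construction concrete, here is a minimal Python sketch of Huffman's algorithm: repeatedly merge the two least-frequent subtrees, prepending a 0 bit to every code on one side and a 1 bit on the other. The function name and the tie-breaking counter are illustrative details, not part of the algorithm itself:

```python
import heapq
from collections import Counter

def huffman_code(text: str) -> dict:
    """Build a Huffman code table (symbol -> bit string) for the given text."""
    # Heap entries are (weight, tie_breaker, {symbol: code_so_far}); the
    # integer tie breaker keeps equal-weight entries comparable.
    heap = [(freq, i, {sym: ""}) for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tie = len(heap)
    if len(heap) == 1:                    # degenerate single-symbol input
        return {sym: "0" for sym in heap[0][2]}
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)   # the two least-frequent subtrees
        w2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

table = huffman_code("this is an example of a huffman tree")
print(table)   # frequent symbols (like ' ') get shorter codes than rare ones
```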
Chirped pulse amplification
Chirped pulse amplification (CPA) is a technique for amplifying an ultrashort laser pulse up to the petawatt level, with the laser pulse being stretched out temporally and spectrally, then amplified, and then compressed again. The stretching and compression uses devices that ensure that the different color components of the pulse travel different distances. CPA for lasers was introduced by Donna Strickland and Gérard Mourou at the University of Rochester in the mid-1980s, work for which they received the Nobel Prize in Physics in 2018.
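A scaled-down numpy sketch of the stretch-and-compress step, assuming a Gaussian pulse and a purely quadratic spectral phase (group-delay dispersion); the 20 fs duration and the gdd value are illustrative, and the amplification stage is only indicated by a comment:

```python
import numpy as np

# Time grid (fs) and a transform-limited Gaussian pulse envelope.
t = np.linspace(-2000, 2000, 4096)           # fs
dt = t[1] - t[0]
field = np.exp(-(t / 20.0) ** 2)             # ~20 fs pulse

# Stretcher: quadratic spectral phase delays different frequency
# components by different amounts, chirping and lengthening the pulse.
omega = 2 * np.pi * np.fft.fftfreq(t.size, d=dt)
gdd = 5000.0                                  # fs^2, illustrative dispersion
stretched = np.fft.ifft(np.fft.fft(field) * np.exp(0.5j * gdd * omega ** 2))

# (Amplification of the now low-peak-power pulse would happen here.)

# Compressor: opposite-sign dispersion undoes the chirp.
compressed = np.fft.ifft(np.fft.fft(stretched) * np.exp(-0.5j * gdd * omega ** 2))

def fwhm(x):
    """Full width at half maximum of the intensity |E|^2, in fs."""
    p = np.abs(x) ** 2
    above = np.where(p > p.max() / 2)[0]
    return (above[-1] - above[0]) * dt

# Roughly: tens of fs -> hundreds of fs -> back to tens of fs.
print(f"original {fwhm(field):.0f} fs, stretched {fwhm(stretched):.0f} fs, "
      f"recompressed {fwhm(compressed):.0f} fs")
```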
Superheterodyne receiver
A superheterodyne receiver, often shortened to superhet, is a type of radio receiver that uses frequency mixing to convert a received signal to a fixed intermediate frequency (IF) which can be more conveniently processed than the original carrier frequency. It was long believed to have been invented by US engineer Edwin Armstrong, but after some controversy the earliest patent for the invention is now credited to French radio engineer and radio manufacturer Lucien Lévy. Virtually all modern radio receivers use the superheterodyne principle.
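The heart of the receiver is the mixer: multiplying the incoming signal by a local oscillator produces components at the sum and difference frequencies, and an IF filter keeps only the fixed difference. A small numpy sketch with illustrative (not broadcast-band) frequencies:

```python
import numpy as np

fs = 1_000_000                 # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)

f_rf = 200_000                 # hypothetical received carrier, Hz
f_lo = 245_000                 # local oscillator, Hz
f_if = f_lo - f_rf             # fixed intermediate frequency: 45 kHz

rf = np.cos(2 * np.pi * f_rf * t)      # incoming signal
lo = np.cos(2 * np.pi * f_lo * t)      # local oscillator
mixed = rf * lo                        # mixer output: sum and difference tones

# The mixer output peaks at |f_lo - f_rf| and f_lo + f_rf; an IF filter
# would keep only the difference component at f_if.
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(mixed.size, d=1 / fs)
peaks = freqs[np.argsort(spectrum)[-2:]]
print(*sorted(peaks))          # 45000.0 445000.0
```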
Radio receiver
In radio communications, a radio receiver, also known as a receiver, a wireless, or simply a radio, is an electronic device that receives radio waves and converts the information carried by them to a usable form. It is used with an antenna. The antenna intercepts radio waves (electromagnetic waves of radio frequency) and converts them to tiny alternating currents which are applied to the receiver, and the receiver extracts the desired information.
Homodyne detection
In electrical engineering, homodyne detection is a method of extracting information encoded as modulation of the phase and/or frequency of an oscillating signal, by comparing that signal with a standard oscillation that would be identical to the signal if it carried null information. "Homodyne" signifies a single frequency, in contrast to the dual frequencies employed in heterodyne detection. When applied to processing of the reflected signal in remote sensing for topography, homodyne detection lacks the ability of heterodyne detection to determine the size of a static discontinuity in elevation between two locations.
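A minimal numpy sketch of the idea: multiplying the signal by in-phase and quadrature copies of a reference at the same frequency and then averaging (a crude low-pass filter) recovers the modulated phase. All values are illustrative:

```python
import numpy as np

fs = 100_000
t = np.arange(0, 0.05, 1 / fs)
f0 = 5_000                                  # carrier shared by signal and reference, Hz

phase = 0.7                                 # "unknown" phase modulation (constant here)
signal = np.cos(2 * np.pi * f0 * t + phase)

# Homodyne: mix with in-phase and quadrature references at the *same*
# frequency, then average; products at 2*f0 average to zero.
i = np.mean(signal * np.cos(2 * np.pi * f0 * t))    # ~ cos(phase)/2
q = np.mean(signal * -np.sin(2 * np.pi * f0 * t))   # ~ sin(phase)/2
print(f"recovered phase: {np.arctan2(q, i):.3f} rad")   # ~ 0.700
```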
Optical heterodyne detection
Optical heterodyne detection is a method of extracting information encoded as modulation of the phase, frequency or both of electromagnetic radiation in the wavelength band of visible or infrared light. The light signal is compared with standard or reference light from a "local oscillator" (LO) that would have a fixed offset in frequency and phase from the signal if the latter carried null information. "Heterodyne" signifies more than one frequency, in contrast to the single frequency employed in homodyne detection.
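A scaled-down numpy sketch of the beat-note principle, with radio-range frequencies standing in for optical fields: a square-law detector responds to the intensity of the summed fields, so its output contains a beat at the fixed offset |f_sig - f_lo| even though the detector cannot follow the carriers themselves. The bandwidth cutoff is illustrative:

```python
import numpy as np

fs = 1_000_000
t = np.arange(0, 0.01, 1 / fs)

# Stand-ins for the signal and local-oscillator fields, offset by 10 kHz.
f_sig, f_lo = 300_000, 310_000
fields = np.cos(2 * np.pi * f_sig * t) + np.cos(2 * np.pi * f_lo * t)
detected = fields ** 2                     # square-law (intensity) detection

spectrum = np.abs(np.fft.rfft(detected - detected.mean()))
freqs = np.fft.rfftfreq(detected.size, d=1 / fs)
band = freqs < 50_000                      # detector bandwidth limit (illustrative)
print(float(freqs[band][np.argmax(spectrum[band])]))   # 10000.0 Hz beat note
```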
Entropy coding
In information theory, an entropy coding (or entropy encoding) is any lossless data compression method that attempts to approach the lower bound declared by Shannon's source coding theorem, which states that any lossless data compression method must have an expected code length greater than or equal to the entropy of the source. More precisely, the source coding theorem states that for any source distribution, the expected code length satisfies \(\mathbb{E}_{x \sim P}[\ell(d(x))] \geq \mathbb{E}_{x \sim P}[-\log_b(P(x))]\), where \(\ell\) is the number of symbols in a code word, \(d\) is the coding function, \(b\) is the number of symbols used to make output codes, and \(P(x)\) is the probability of the source symbol.
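As a concrete check of the bound, the following sketch computes the empirical entropy of a short string and compares it with the cost of a naive fixed-length code; the text and the numbers are illustrative:

```python
import math
from collections import Counter

def entropy_bits(text: str) -> float:
    """Shannon entropy of the empirical symbol distribution (bits/symbol)."""
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in Counter(text).values())

text = "abracadabra"
h = entropy_bits(text)
# Any lossless symbol code needs at least h bits/symbol on average; a
# fixed-length code for the 5 distinct symbols costs ceil(log2(5)) = 3.
print(f"entropy lower bound: {h:.3f} bits/symbol")   # ~2.040
print(f"fixed-length code:   {math.ceil(math.log2(len(set(text))))} bits/symbol")
```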
Linear code
In coding theory, a linear code is an error-correcting code for which any linear combination of codewords is also a codeword. Linear codes are traditionally partitioned into block codes and convolutional codes, although turbo codes can be seen as a hybrid of these two types. Linear codes allow for more efficient encoding and decoding algorithms than other codes (cf. syndrome decoding). Linear codes are used in forward error correction and are applied in methods for transmitting symbols (e.g., bits) across a communications channel so that, if errors occur, some of them can be detected or corrected by the recipient.
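The defining property is easy to demonstrate. The sketch below uses one standard-form generator matrix for the (7,4) Hamming code; encoding is a matrix product over GF(2), so the XOR (GF(2) sum) of two codewords is again a codeword:

```python
import numpy as np

# A standard-form generator matrix of the (7,4) Hamming code over GF(2).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

def encode(msg):
    """Encode a 4-bit message: codeword = msg . G, arithmetic mod 2."""
    return (np.array(msg, dtype=np.uint8) @ G) % 2

c1 = encode([1, 0, 1, 1])
c2 = encode([0, 1, 1, 0])

# Linearity: the GF(2) sum of two codewords is the codeword of the
# GF(2) sum of the two messages.
assert np.array_equal(c1 ^ c2, encode([1, 1, 0, 1]))
print(c1, c2, c1 ^ c2)
```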
Quantization (signal processing)
Quantization, in mathematics and digital signal processing, is the process of mapping input values from a large set (often a continuous set) to output values in a (countable) smaller set, often with a finite number of elements. Rounding and truncation are typical examples of quantization processes. Quantization is involved to some degree in nearly all digital signal processing, as the process of representing a signal in digital form ordinarily involves rounding. Quantization also forms the core of essentially all lossy compression algorithms.
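A minimal Python sketch of a uniform (mid-tread) quantizer, which maps each input to the nearest multiple of a step size; rounding to integers is the special case step = 1, and each sample's error is bounded by half a step:

```python
import numpy as np

def uniform_quantize(x, step):
    """Mid-tread uniform quantizer: snap each value to the nearest multiple of step."""
    return step * np.round(x / step)

x = np.array([-1.27, -0.4, 0.03, 0.51, 0.98])
xq = uniform_quantize(x, 0.25)
print(xq)        # [-1.25 -0.5   0.    0.5   1.  ]
print(xq - x)    # quantization error, each entry within +/- step/2
```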