Information theory: Information theory is the mathematical study of the quantification, storage, and communication of information. The field was originally established by the works of Harry Nyquist and Ralph Hartley in the 1920s, and Claude Shannon in the 1940s. A branch of applied mathematics, it lies at the intersection of probability theory, statistics, computer science, statistical mechanics, information engineering, and electrical engineering. A key measure in information theory is entropy.
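As a small illustration of the entropy measure mentioned above, the following Python sketch computes the Shannon entropy of a discrete distribution in bits; the function name and the example distributions are illustrative choices, not taken from the article.

```python
import math

def shannon_entropy(probabilities):
    """Shannon entropy H(X) = -sum p * log2(p), in bits."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin carries 1 bit of entropy; a biased coin carries less.
print(shannon_entropy([0.5, 0.5]))   # 1.0
print(shannon_entropy([0.9, 0.1]))   # ~0.469
```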
Phased array: In antenna theory, a phased array usually means an electronically scanned array, a computer-controlled array of antennas that creates a beam of radio waves which can be electronically steered to point in different directions without moving the antennas. In a simple array antenna, the radio-frequency current from the transmitter is fed to multiple individual antenna elements with the proper phase relationship so that the radio waves from the separate elements combine (superpose) to form beams, increasing the power radiated in desired directions and suppressing radiation in undesired directions.
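The steering idea can be sketched numerically: for a uniform linear array, applying a progressive phase shift across the elements moves the direction in which their contributions add in phase. The element count, spacing, and steering angle below are illustrative assumptions, not parameters from the article.

```python
import numpy as np

lam = 1.0                # wavelength (arbitrary units)
d = 0.5 * lam            # half-wavelength element spacing
N = 8                    # number of elements
steer = np.deg2rad(30)   # desired beam direction

k = 2 * np.pi / lam
n = np.arange(N)
# Progressive phase applied to each element so contributions add in phase at `steer`.
element_phase = -k * d * n * np.sin(steer)

theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
# Array factor: superposition of the N element contributions at each look angle.
af = np.abs(np.exp(1j * (k * d * np.outer(np.sin(theta), n) + element_phase)).sum(axis=1))
peak = theta[np.argmax(af)]
print(f"beam peak near {np.degrees(peak):.1f} degrees")  # ~30.0
```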
Neuronal ensemble: A neuronal ensemble is a population of nervous system cells (or cultured neurons) involved in a particular neural computation. The concept of a neuronal ensemble dates back to the work of Charles Sherrington, who described the functioning of the central nervous system as a system of reflex arcs, each composed of interconnected excitatory and inhibitory neurons. In Sherrington's scheme, α-motoneurons are the final common path of a number of neural circuits of different complexity: motoneurons integrate a large number of inputs and send their final output to muscles.
Randomized algorithm: A randomized algorithm is an algorithm that employs a degree of randomness as part of its logic or procedure. The algorithm typically uses uniformly random bits as an auxiliary input to guide its behavior, in the hope of achieving good performance in the "average case" over all possible choices of randomness determined by the random bits; thus either the running time, the output, or both are random variables.
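A classic concrete example is randomized quickselect, sketched below: the output is always the same, but the running time depends on the random pivot choices, illustrating how the random bits make the running time a random variable. The example data are arbitrary.

```python
import random

def quickselect(items, k):
    """Return the k-th smallest element (0-indexed).

    The pivot is chosen uniformly at random, so the running time is a
    random variable with expected O(n) comparisons, while the output is
    deterministic (a Las Vegas algorithm).
    """
    pivot = random.choice(items)
    lows = [x for x in items if x < pivot]
    highs = [x for x in items if x > pivot]
    pivots = [x for x in items if x == pivot]
    if k < len(lows):
        return quickselect(lows, k)
    if k < len(lows) + len(pivots):
        return pivot
    return quickselect(highs, k - len(lows) - len(pivots))

print(quickselect([9, 1, 7, 3, 5], 2))  # 5 (the median)
```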
Channel capacity: Channel capacity, in electrical engineering, computer science, and information theory, is the tight upper bound on the rate at which information can be reliably transmitted over a communication channel. Following the terms of the noisy-channel coding theorem, the channel capacity of a given channel is the highest information rate (in units of information per unit time) that can be achieved with arbitrarily small error probability. Information theory, developed by Claude E. Shannon, defines the notion of channel capacity and provides a mathematical model by which it can be computed.
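As a small worked example, the binary symmetric channel with crossover probability p has capacity C = 1 - H(p) bits per channel use, where H is the binary entropy function; the sketch below assumes that standard result, and the sample probabilities are illustrative.

```python
import math

def binary_entropy(p):
    """Binary entropy H(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(crossover):
    """Capacity of a binary symmetric channel: C = 1 - H(p) bits per use."""
    return 1.0 - binary_entropy(crossover)

print(bsc_capacity(0.0))   # 1.0  (noiseless channel)
print(bsc_capacity(0.11))  # ~0.5
print(bsc_capacity(0.5))   # 0.0  (output independent of input)
```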
Array slicing: In computer programming, array slicing is an operation that extracts a subset of elements from an array and packages them as another array, possibly in a different dimension from the original. Common examples of array slicing are extracting a substring from a string of characters (the "ell" in "hello"), extracting a row or column from a two-dimensional array, or extracting a vector from a matrix. Depending on the programming language, an array slice can be made out of non-consecutive elements.
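The examples in the paragraph above translate directly into code. The following Python sketch (using NumPy for the two-dimensional case) shows a substring slice, row and column slices, and a strided slice of non-consecutive elements; the particular array values are arbitrary.

```python
import numpy as np

s = "hello"
print(s[1:4])          # "ell" — a substring is a slice of a string

m = np.arange(12).reshape(3, 4)   # a 3x4 two-dimensional array
print(m[1, :])         # second row, a one-dimensional slice
print(m[:, 2])         # third column
print(m[0, ::2])       # non-consecutive elements via a stride of 2
```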
Feature (machine learning): In machine learning and pattern recognition, a feature is an individual measurable property or characteristic of a phenomenon. Choosing informative, discriminating, and independent features is a crucial element of effective algorithms in pattern recognition, classification, and regression. Features are usually numeric, but structural features such as strings and graphs are used in syntactic pattern recognition. The concept of "feature" is related to that of explanatory variable used in statistical techniques such as linear regression.
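A minimal sketch of what "feature" means in practice: each raw observation (here, a hypothetical email) is mapped to a vector of numeric features. The particular features chosen are illustrative assumptions, not a recommended set.

```python
# Each email becomes a numeric feature vector for a spam classifier.
emails = [
    {"text": "WIN A FREE PRIZE NOW", "spam": True},
    {"text": "Meeting moved to Friday afternoon", "spam": False},
]

def extract_features(message):
    words = message.split()
    return [
        len(words),                                    # word count
        sum(w.isupper() for w in words) / len(words),  # fraction of all-caps words
        int("free" in message.lower()),                # presence of a trigger word
    ]

X = [extract_features(e["text"]) for e in emails]
y = [e["spam"] for e in emails]
print(X)  # [[5, 1.0, 1], [5, 0.0, 0]]
```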
Feature selection: Feature selection is the process of selecting a subset of relevant features (variables, predictors) for use in model construction. Stylometry and DNA microarray analysis are two cases where feature selection is used. It should be distinguished from feature extraction. Feature selection techniques are used for several reasons: to simplify models so they are easier for researchers and users to interpret, to shorten training times, to avoid the curse of dimensionality, to improve the data's compatibility with a class of learning models, and to encode inherent symmetries present in the input space.
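One simple filter-style technique, shown below as an illustrative sketch, ranks features by the absolute correlation of each column with the target and keeps the top k. The synthetic data and the choice of correlation as the scoring function are assumptions made for the example, not the only way feature selection is done.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
informative = rng.normal(size=(n, 2))   # two features that actually drive the target
noise = rng.normal(size=(n, 5))         # five irrelevant features
X = np.hstack([informative, noise])
y = 3 * informative[:, 0] - 2 * informative[:, 1] + 0.1 * rng.normal(size=n)

def select_k_best(X, y, k):
    """Keep the k columns with the largest |Pearson correlation| with y."""
    scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    keep = np.argsort(scores)[::-1][:k]
    return np.sort(keep)

print(select_k_best(X, y, k=2))   # [0 1] — the informative columns
```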
Global optimization: Global optimization is a branch of applied mathematics and numerical analysis that attempts to find the global minima or maxima of a function or a set of functions on a given set. It is usually described as a minimization problem because the maximization of the real-valued function $g(x)$ is equivalent to the minimization of the function $f(x) := -g(x)$. Given a possibly nonlinear and non-convex continuous function $f : \Omega \subset \mathbb{R}^n \to \mathbb{R}$ with the global minimum $f^*$ and the set of all global minimizers $X^*$ in $\Omega$, the standard minimization problem can be given as $\min_{x \in \Omega} f(x)$, that is, finding $f^*$ and a global minimizer in $X^*$; here $\Omega$ is a (not necessarily convex) compact set defined by inequalities $g_i(x) \ge 0,\ i = 1, \dots, r$.
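As a concrete (if crude) strategy for the standard minimization problem above, pure random search over a compact box-shaped feasible set is a valid global-optimization baseline. The Rastrigin test function, bounds, and sample budget below are illustrative choices for the sketch, not methods from the article.

```python
import numpy as np

def rastrigin(x):
    """A standard non-convex test function; global minimum 0 at x = 0."""
    return 10 * len(x) + sum(xi**2 - 10 * np.cos(2 * np.pi * xi) for xi in x)

def random_search(f, bounds, samples=100_000, seed=0):
    """Pure random search over a box-shaped compact feasible set."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    best_x, best_f = None, np.inf
    for _ in range(samples):
        x = rng.uniform(lo, hi, size=2)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

x_star, f_star = random_search(rastrigin, bounds=(-5.12, 5.12))
print(x_star, f_star)   # close to [0, 0] and 0, but not exact
```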
Convex optimization: Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets (or, equivalently, maximizing concave functions over convex sets). Many classes of convex optimization problems admit polynomial-time algorithms, whereas mathematical optimization is in general NP-hard.
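A minimal sketch of one such problem and method: projected gradient descent minimizing the convex quadratic $\|Ax - b\|^2$ over the convex set of non-negative vectors. The random problem data and the step size (the usual $1/L$ rule, where $L$ is the Lipschitz constant of the gradient) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)

def f(x):
    return np.sum((A @ x - b) ** 2)

x = np.zeros(5)
# Gradient of f is 2 A^T (Ax - b), with Lipschitz constant L = 2 * sigma_max(A)^2.
step = 1.0 / (2 * np.linalg.norm(A, 2) ** 2)
for _ in range(500):
    grad = 2 * A.T @ (A @ x - b)
    x = np.maximum(x - step * grad, 0.0)  # gradient step, then project onto x >= 0

print(x)      # approximate non-negative least-squares solution
print(f(x))   # its objective value
```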