Randomized algorithm
A randomized algorithm is an algorithm that employs a degree of randomness as part of its logic or procedure. The algorithm typically uses uniformly random bits as an auxiliary input to guide its behavior, in the hope of achieving good performance in the "average case" over all possible choices of randomness determined by the random bits; thus either the running time, the output, or both are random variables.
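As a minimal illustration (not part of the article), the quicksort sketch below chooses its pivot uniformly at random: the output is always the sorted list, but the running time is a random variable determined by the pivot choices.

```python
import random

def randomized_quicksort(items):
    # The output is deterministic (the sorted list), but the number of
    # comparisons depends on the uniformly random pivot choices.
    if len(items) <= 1:
        return list(items)
    pivot = random.choice(items)              # auxiliary random input
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

print(randomized_quicksort([3, 1, 4, 1, 5, 9, 2, 6]))   # [1, 1, 2, 3, 4, 5, 6, 9]
```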
Analysis of algorithms
In computer science, the analysis of algorithms is the process of finding the computational complexity of algorithms—the amount of time, storage, or other resources needed to execute them. Usually, this involves determining a function that relates the size of an algorithm's input to the number of steps it takes (its time complexity) or the number of storage locations it uses (its space complexity). An algorithm is said to be efficient when this function's values are small, or grow slowly compared to a growth in the size of the input.
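For example, the sketch below (an illustration, not from the article) counts the comparisons made by a linear search; in the worst case the count grows linearly with the input size n, so the time complexity is O(n).

```python
def linear_search(items, target):
    # Returns (found, comparisons); the comparison count is the "number of
    # steps" that the analysis relates to the input size.
    comparisons = 0
    for x in items:
        comparisons += 1
        if x == target:
            return True, comparisons
    return False, comparisons

# Worst case (target absent): the step count equals n.
for n in (10, 100, 1000):
    _, steps = linear_search(list(range(n)), -1)
    print(n, steps)        # 10 10 / 100 100 / 1000 1000
```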
Bayesian statistics
Bayesian statistics (/ˈbeɪziən/ or /ˈbeɪʒən/) is a theory in the field of statistics based on the Bayesian interpretation of probability, where probability expresses a degree of belief in an event. The degree of belief may be based on prior knowledge about the event, such as the results of previous experiments, or on personal beliefs about the event. This differs from a number of other interpretations of probability, such as the frequentist interpretation that views probability as the limit of the relative frequency of an event after many trials.
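A minimal sketch of a Bayesian update (illustrative, not from the article): a prior degree of belief over two hypotheses about a coin is revised with Bayes' theorem as heads are observed.

```python
# Prior degrees of belief over two hypotheses about a coin.
prior = {"fair": 0.5, "biased": 0.5}
p_heads = {"fair": 0.5, "biased": 0.8}   # assumed likelihood of heads under each hypothesis

def update(belief, likelihood):
    # Bayes' theorem: posterior is proportional to likelihood times prior.
    unnormalized = {h: likelihood[h] * belief[h] for h in belief}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

belief = prior
for _ in range(3):                       # observe three heads in a row
    belief = update(belief, p_heads)
print(belief)                            # belief in "biased" has grown with each head
```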
Feature engineering
Feature engineering, also called feature extraction or feature discovery, is the process of extracting features (characteristics, properties, attributes) from raw data. Because deep learning networks, such as convolutional neural networks, are able to learn features by themselves, domain-specific manual feature engineering has become obsolete for vision and speech processing.
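As a hedged sketch of manual feature engineering (the timestamp format and feature names are illustrative assumptions, not from the article), raw timestamps are turned into numeric attributes a model can use:

```python
from datetime import datetime

def engineer_features(raw_timestamp):
    # Hand-crafted characteristics extracted from a raw ISO-format timestamp.
    t = datetime.fromisoformat(raw_timestamp)
    return {
        "hour": t.hour,                       # time-of-day effect
        "day_of_week": t.weekday(),           # 0 = Monday ... 6 = Sunday
        "is_weekend": int(t.weekday() >= 5),  # derived binary attribute
    }

print(engineer_features("2021-06-05T14:30:00"))
# {'hour': 14, 'day_of_week': 5, 'is_weekend': 1}
```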
Algorithmic efficiency
In computer science, algorithmic efficiency is a property of an algorithm which relates to the amount of computational resources used by the algorithm. An algorithm must be analyzed to determine its resource usage, and the efficiency of an algorithm can be measured based on the usage of different resources. Algorithmic efficiency can be thought of as analogous to engineering productivity for a repeating or continuous process. For maximum efficiency it is desirable to minimize resource usage.
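One way to measure two of these resources in practice (a sketch under the assumption of the Python standard library, not the article's method) is to time a function and track its peak memory allocation:

```python
import time
import tracemalloc

def build_squares(n):
    return [i * i for i in range(n)]

tracemalloc.start()
start = time.perf_counter()
build_squares(100_000)
elapsed = time.perf_counter() - start
_, peak = tracemalloc.get_traced_memory()   # (current, peak) bytes
tracemalloc.stop()

print(f"time: {elapsed:.4f} s, peak memory: {peak / 1024:.1f} KiB")
```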
Multilinear subspace learning
Multilinear subspace learning is an approach for disentangling the causal factors of data formation and performing dimensionality reduction. The dimensionality reduction can be performed on a data tensor that contains a collection of observations that have been vectorized, or observations that are treated as matrices and concatenated into a data tensor. Examples of data tensors whose observations are vectorized, or whose observations are matrices concatenated into a data tensor, include images (2D/3D), video sequences (3D/4D), and hyperspectral cubes (3D/4D).
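A minimal sketch of the data-tensor view (illustrative, using NumPy and randomly generated data): observations kept as matrices are stacked into a third-order tensor, and mode-n unfolding rearranges it into the matrices on which multilinear projections are then computed.

```python
import numpy as np

# 50 hypothetical 8x8 grayscale images kept as matrices (not vectorized)
# and concatenated into a third-order data tensor of shape (50, 8, 8).
rng = np.random.default_rng(0)
data_tensor = rng.standard_normal((50, 8, 8))

def unfold(tensor, mode):
    # Mode-n unfolding: move the chosen mode to the front and flatten the rest.
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

print(unfold(data_tensor, 0).shape)   # (50, 64): one row per observation
print(unfold(data_tensor, 1).shape)   # (8, 400)
```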
Ensemble learning
In statistics and machine learning, ensemble methods use multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone. Unlike a statistical ensemble in statistical mechanics, which is usually infinite, a machine learning ensemble consists of only a concrete finite set of alternative models, but typically allows for much more flexible structure to exist among those alternatives.
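A minimal sketch of such a finite ensemble (illustrative, not from the article): the class predictions of a few alternative models are combined by majority vote.

```python
import numpy as np

def majority_vote(predictions):
    # predictions: one row of class labels per model, one column per sample.
    predictions = np.asarray(predictions)
    combined = []
    for column in predictions.T:
        values, counts = np.unique(column, return_counts=True)
        combined.append(values[np.argmax(counts)])
    return np.array(combined)

# Predictions of three hypothetical models on four samples:
model_preds = [
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [1, 1, 1, 0],
]
print(majority_vote(model_preds))     # [0 1 1 0]
```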
N200 (neuroscience)
The N200, or N2, is an event-related potential (ERP) component. An ERP can be monitored using a non-invasive electroencephalography (EEG) cap that is fitted over the scalp on human subjects. An EEG cap allows researchers and clinicians to monitor the minute electrical activity that reaches the surface of the scalp from post-synaptic potentials in neurons, which fluctuate in relation to cognitive processing. EEG provides millisecond-level temporal resolution and is therefore known as one of the most direct measures of covert mental operations in the brain.
Difference due to memory
Difference due to memory (Dm) indexes differences in neural activity during the study phase of an experiment for items that subsequently are remembered compared to items that are later forgotten. It is mainly discussed as an event-related potential (ERP) effect that appears in studies employing a subsequent memory paradigm, in which ERPs are recorded when a participant is studying a list of materials and trials are sorted as a function of whether they go on to be remembered or not in the test phase.
Computational complexity
In computer science, the computational complexity or simply complexity of an algorithm is the amount of resources required to run it. Particular focus is given to computation time (generally measured by the number of needed elementary operations) and memory storage requirements. The complexity of a problem is the complexity of the best algorithms that allow solving the problem. The study of the complexity of explicitly given algorithms is called analysis of algorithms, while the study of the complexity of problems is called computational complexity theory.
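As an illustration (not from the article), the same problem, detecting a duplicate in a list, can be solved by algorithms with very different time and memory requirements; the complexity of the problem itself refers to the best such algorithm.

```python
def has_duplicates_quadratic(items):
    # About n*(n-1)/2 comparisons: quadratic time, constant extra memory.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):
    # About n hash-set operations: linear time, but linear extra memory.
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False

data = list(range(2_000))
print(has_duplicates_quadratic(data), has_duplicates_linear(data))   # False False
```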