Synthetic data
Synthetic data is information that is artificially generated rather than produced by real-world events. Typically created using algorithms, synthetic data can be deployed to validate mathematical models and to train machine learning models. Data generated by a computer simulation can be seen as synthetic data. This encompasses most applications of physical modeling, such as music synthesizers or flight simulators. The output of such systems approximates the real thing, but is fully algorithmically generated.
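As a minimal sketch of the idea (assuming NumPy and scikit-learn are available; all names and parameters here are illustrative), synthetic regression data can be drawn from a known model and then used to check that a fitted model recovers the generating parameters:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Generate synthetic data from a known linear model, then verify that
# a model trained on it recovers the ground-truth coefficients.
rng = np.random.default_rng(0)
true_coef = np.array([2.0, -1.0, 0.5])                 # ground-truth parameters
X = rng.normal(size=(1000, 3))                          # synthetic inputs
y = X @ true_coef + rng.normal(scale=0.1, size=1000)    # synthetic noisy targets

model = LinearRegression().fit(X, y)
print(model.coef_)   # should be close to true_coef
```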
Factor analysis
Factor analysis (French: analyse factorielle) is a term that today covers several methods for analysing large rectangular tables of data, aimed at identifying and ranking factors correlated with the data placed in the columns. In the Anglo-Saxon sense of the term, factor analysis designates a method from the family of multivariate statistics, used to describe a set of observed variables by means of latent (unobserved) variables.
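A minimal sketch of the Anglo-Saxon sense, assuming scikit-learn's FactorAnalysis and illustrative dimensions: five observed variables are generated from two latent factors, and the loadings are then estimated from the observations alone.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Five observed variables driven by two unobserved (latent) factors.
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 2))                      # latent factors
loading = rng.normal(size=(2, 5))                       # factor loadings
observed = latent @ loading + 0.1 * rng.normal(size=(500, 5))

fa = FactorAnalysis(n_components=2).fit(observed)
print(fa.components_.shape)   # (2, 5): estimated loading matrix
```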
Predictive modelling
Predictive modelling uses statistics to predict outcomes. Most often the event one wants to predict is in the future, but predictive modelling can be applied to any type of unknown event, regardless of when it occurred. For example, predictive models are often used to detect crimes and identify suspects after the crime has taken place. In many cases, the model is chosen on the basis of detection theory to estimate the probability of an outcome given a set amount of input data, for example determining how likely it is that a given email is spam.
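As a hedged example of the spam case, assuming scikit-learn and a toy corpus far too small for a real filter, a naive Bayes pipeline can output the probability that a given email is spam:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy labelled corpus: 1 = spam, 0 = not spam.
emails = ["win a free prize now", "meeting agenda attached",
          "free money click here", "lunch tomorrow?"]
labels = [1, 0, 1, 0]

clf = make_pipeline(CountVectorizer(), MultinomialNB()).fit(emails, labels)
print(clf.predict_proba(["claim your free prize"])[0, 1])  # P(spam | email)
```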
Latent variable
In statistics, latent variables (from Latin lateo, "lie hidden") are variables that can only be inferred indirectly, through a mathematical model, from other variables that can be directly observed or measured. Such latent variable models are used in many disciplines, including political science, demography, engineering, medicine, ecology, physics, machine learning/artificial intelligence, bioinformatics, chemometrics, natural language processing, management, psychology and the social sciences.
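One common latent variable model is a Gaussian mixture, in which the component each point came from is the latent variable. A minimal sketch, assuming scikit-learn and illustrative parameters:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Each point is drawn from one of two Gaussians; the component label is
# latent and must be inferred from the observed values alone.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 1, size=(200, 1)),
                       rng.normal(3, 1, size=(200, 1))])

gmm = GaussianMixture(n_components=2).fit(data)
print(gmm.means_.ravel())            # recovered component means
print(gmm.predict_proba(data[:3]))   # posterior over the latent label
```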
Big data
Big data (French: mégadonnées or données massives) refers to information resources whose characteristics in terms of volume, velocity and variety require the use of particular technologies and analytical methods to create value, and which generally exceed the capacities of a single machine and call for parallelized processing. The quantitative (and often redundant) explosion of digital data enables a new approach to analysing the world.
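As a loose single-machine stand-in for the parallelized-processing point (real big-data workloads span many machines and frameworks), a computation can be split into chunks whose partial results are computed in parallel and then combined:

```python
from multiprocessing import Pool

# Map-reduce-style sketch: split the data into chunks, compute partial
# results in parallel, then combine them.
def partial_sum(chunk):
    return sum(chunk)

if __name__ == "__main__":
    chunks = [range(i, i + 1_000_000) for i in range(0, 10_000_000, 1_000_000)]
    with Pool(4) as pool:
        print(sum(pool.map(partial_sum, chunks)))  # same as summing all the data
```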
Signal separation
Source separation, blind signal separation (BSS) or blind source separation, is the separation of a set of source signals from a set of mixed signals, without the aid of information (or with very little information) about the source signals or the mixing process. It is most commonly applied in digital signal processing and involves the analysis of mixtures of signals; the objective is to recover the original component signals from a mixture signal.
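A classic illustration of BSS is independent component analysis. The sketch below, assuming scikit-learn's FastICA and synthetic sources, recovers two mixed signals up to scale and ordering:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Mix two independent sources, then recover them from the mixtures alone.
t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]   # sine + square wave
mixing = np.array([[1.0, 0.5], [0.5, 1.0]])
mixed = sources @ mixing.T                               # observed mixtures

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(mixed)   # estimated sources (up to scale/order)
```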
Nonlinear dimensionality reduction
Nonlinear dimensionality reduction, also known as manifold learning, refers to various related techniques that aim to project high-dimensional data onto lower-dimensional latent manifolds, with the goal of either visualizing the data in the low-dimensional space, or learning the mapping (either from the high-dimensional space to the low-dimensional embedding or vice versa) itself. The techniques described below can be understood as generalizations of linear decomposition methods used for dimensionality reduction, such as singular value decomposition and principal component analysis.
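A minimal sketch with one such technique, assuming scikit-learn and its synthetic "swiss roll" dataset: Isomap unrolls a 3-D point cloud onto the 2-D manifold it actually lies on.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# Project a 3-D swiss roll onto its underlying 2-D latent manifold.
X, _ = make_swiss_roll(n_samples=1000, random_state=0)
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(embedding.shape)   # (1000, 2)
```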
MM algorithm
The MM algorithm is an iterative optimization method which exploits the convexity of a function in order to find its maxima or minima. The MM stands for "Majorize-Minimization" or "Minorize-Maximization", depending on whether the desired optimization is a minimization or a maximization. Despite the name, MM itself is not an algorithm, but a description of how to construct an optimization algorithm. The expectation–maximization algorithm can be treated as a special case of the MM algorithm.
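As one worked instance of how such an algorithm can be constructed (illustrative, not the only construction): the 1-D median minimizes f(m) = sum_i |x_i - m|, and majorizing each |x_i - m| by the quadratic (x_i - m)^2 / (2|x_i - m_k|) + |x_i - m_k| / 2 at the current iterate m_k turns every MM step into a weighted mean.

```python
import numpy as np

# Majorize-minimize iteration for the 1-D median.  Minimizing the quadratic
# surrogate sum_i w_i (x_i - m)^2 / 2 with w_i = 1 / |x_i - m_k| yields a
# weighted-mean update; each step decreases the original objective.
def mm_median(x, iters=50):
    m = x.mean()                                    # any starting point
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(x - m), 1e-12)  # majorizer weights
        m = np.sum(w * x) / np.sum(w)               # minimize the surrogate
    return m

x = np.array([1.0, 2.0, 3.0, 10.0, 11.0])
print(mm_median(x), np.median(x))   # both are approximately 3.0
```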
Low-rank approximation
In mathematics, low-rank approximation is a minimization problem, in which the cost function measures the fit between a given matrix (the data) and an approximating matrix (the optimization variable), subject to a constraint that the approximating matrix has reduced rank. The problem is used for mathematical modeling and data compression. The rank constraint is related to a constraint on the complexity of a model that fits the data. In applications, there are often other constraints on the approximating matrix apart from the rank constraint, e.g., element-wise non-negativity or Hankel structure.
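When the fit is measured in the Frobenius norm and the rank constraint is the only one, the Eckart-Young theorem says the problem is solved by truncating the singular value decomposition. A minimal NumPy sketch with illustrative sizes:

```python
import numpy as np

# Best rank-k approximation in the Frobenius norm: keep the top k
# singular triplets of the SVD.
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 6))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]   # rank-k approximation
print(np.linalg.matrix_rank(A_k))          # 2
print(np.linalg.norm(A - A_k))             # sqrt of the sum of discarded s**2
```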
Canonical correlation analysis
Canonical correlation analysis (French: analyse canonique des corrélations, sometimes analyse des corrélations canoniques) makes it possible to compare two groups of quantitative variables, both measured on the same individuals. The goal of canonical analysis is to compare these two groups of variables to determine whether they describe the same phenomenon, in which case one of the two groups of variables can be dispensed with. A telling example is that of medical tests performed on the same samples by two different laboratories.
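A minimal sketch of the two-laboratory idea, assuming scikit-learn's CCA and synthetic data: two views of the same individuals share a common signal, and CCA finds the linear combinations of each view that are maximally correlated.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Two views driven by the same underlying phenomenon, plus noise.
rng = np.random.default_rng(0)
shared = rng.normal(size=(300, 1))                     # common signal
X = shared @ rng.normal(size=(1, 4)) + 0.2 * rng.normal(size=(300, 4))
Y = shared @ rng.normal(size=(1, 3)) + 0.2 * rng.normal(size=(300, 3))

cca = CCA(n_components=1).fit(X, Y)
Xc, Yc = cca.transform(X, Y)
print(np.corrcoef(Xc[:, 0], Yc[:, 0])[0, 1])   # close to 1
```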