Online machine learning
In computer science, online machine learning is a method of machine learning in which data becomes available in a sequential order and is used to update the best predictor for future data at each step, as opposed to batch learning techniques, which generate the best predictor by learning on the entire training data set at once. Online learning is a common technique in areas of machine learning where it is computationally infeasible to train over the entire dataset, requiring out-of-core algorithms.
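A minimal sketch of the idea, assuming scikit-learn's SGDClassifier (whose partial_fit method supports incremental updates): the model is refined one mini-batch at a time as data arrives, so the full dataset never has to be held in memory.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()                  # a linear model fitted by stochastic gradient descent
classes = np.array([0, 1])               # all labels must be declared up front

for step in range(100):
    # Each mini-batch arrives sequentially; the predictor is updated at every step.
    X_batch = rng.normal(size=(32, 5))
    y_batch = (X_batch[:, 0] + X_batch[:, 1] > 0).astype(int)
    model.partial_fit(X_batch, y_batch, classes=classes)

print(model.predict(rng.normal(size=(3, 5))))
```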
Stability (learning theory)
Stability, also known as algorithmic stability, is a notion in computational learning theory of how a machine learning algorithm's output changes with small perturbations to its inputs. A stable learning algorithm is one whose predictions do not change much when the training data is modified slightly. For instance, consider a machine learning algorithm that is being trained to recognize handwritten letters of the alphabet, using 1000 examples of handwritten letters and their labels ("A" to "Z") as a training set.
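A rough way to probe stability empirically (a sketch using a k-nearest-neighbour classifier on synthetic data, not any canonical procedure): retrain after removing single training examples and measure how often the test predictions change.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] > 0).astype(int)
X_test = rng.normal(size=(200, 10))

base = KNeighborsClassifier(n_neighbors=15).fit(X, y).predict(X_test)

# Leave one training example out at a time and count how many test predictions flip.
flips = []
for i in range(20):
    mask = np.arange(len(X)) != i
    perturbed = KNeighborsClassifier(n_neighbors=15).fit(X[mask], y[mask]).predict(X_test)
    flips.append(np.mean(perturbed != base))

print(f"average fraction of changed predictions: {np.mean(flips):.4f}")
```

A small average fraction of changed predictions is the informal signature of a stable algorithm.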
Pattern recognition
Pattern recognition is the automated recognition of patterns and regularities in data. While similar, pattern recognition (PR) is not to be confused with pattern machines (PM), which may possess PR capabilities but whose primary function is to distinguish and create emergent patterns. PR has applications in statistical data analysis, signal processing, information retrieval, bioinformatics, data compression, computer graphics and machine learning.
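As a small illustration of statistical pattern recognition (a sketch assuming scikit-learn and its bundled handwritten-digits dataset): a classifier learns regularities in pixel data and then labels digits it has never seen.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()                                   # 8x8 images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

# The classifier picks up regularities in the pixel patterns of each digit class.
clf = SVC(gamma=0.001).fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```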
Variational autoencoder
In machine learning, a variational autoencoder (VAE) is an artificial neural network architecture introduced by Diederik P. Kingma and Max Welling. It is part of the families of probabilistic graphical models and variational Bayesian methods. Variational autoencoders are often associated with the autoencoder model because of their architectural affinity, but they differ significantly in goal and mathematical formulation. Variational autoencoders are probabilistic generative models that require neural networks as only a part of their overall structure.
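A minimal sketch of the architecture, assuming PyTorch (class and function names here are illustrative): the encoder network outputs the mean and log-variance of a latent Gaussian, a latent sample is drawn via the reparameterization trick, and the loss combines reconstruction error with a KL term, which is where the probabilistic formulation departs from a plain autoencoder.

```python
import torch
from torch import nn

class TinyVAE(nn.Module):
    def __init__(self, in_dim=784, hidden=256, latent=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent)
        self.logvar = nn.Linear(hidden, latent)
        self.dec = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                 nn.Linear(hidden, in_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior (negative ELBO).
    recon = nn.functional.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

x = torch.rand(4, 784)                    # e.g. flattened images scaled to [0, 1]
x_hat, mu, logvar = TinyVAE()(x)
print(vae_loss(x, x_hat, mu, logvar).item())
```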
Social skills
A social skill is any competence facilitating interaction and communication with others where social rules and relations are created, communicated, and changed in verbal and nonverbal ways. The process of learning these skills is called socialization. A lack of such skills can cause social awkwardness. Interpersonal skills are actions used to effectively interact with others. Interpersonal skills relate to categories of dominance vs. submission, love vs. hate, affiliation vs. aggression, and control vs. autonomy.
Motor learning
Motor learning refers broadly to changes in an organism's movements that reflect changes in the structure and function of the nervous system. Motor learning occurs over varying timescales and degrees of complexity: humans learn to walk or talk over the course of years, but continue to adjust to changes in height, weight, strength, etc. over their lifetimes. Motor learning enables animals to gain new skills and improves the smoothness and accuracy of movements, in some cases by calibrating simple movements like reflexes.
Self-supervised learning
Self-supervised learning (SSL) is a paradigm in machine learning for processing data of lower quality, rather than improving ultimate outcomes. Self-supervised learning more closely imitates the way humans learn to classify objects. The typical SSL method is based on an artificial neural network or another model, such as a decision list. The model learns in two steps. First, the task is solved based on an auxiliary or pretext classification task using pseudo-labels, which help to initialize the model parameters.
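One common pretext task is rotation prediction; the sketch below (plain NumPy, with the helper make_rotation_pretext being illustrative rather than any standard API) shows how the pseudo-labels for the first step are generated from the data itself, with no human annotation.

```python
import numpy as np

def make_rotation_pretext(images):
    """Step 1 of the two-step scheme: derive pseudo-labels from the data itself.

    Each image is rotated by 0, 90, 180 or 270 degrees; the rotation index
    serves as the pseudo-label for the pretext classification task.
    """
    rotated, pseudo_labels = [], []
    for img in images:
        k = np.random.randint(4)          # 0..3 quarter turns
        rotated.append(np.rot90(img, k))
        pseudo_labels.append(k)
    return np.stack(rotated), np.array(pseudo_labels)

# Unlabeled data: training on the pretext task initializes the model's
# parameters before step 2, fine-tuning on the real downstream task.
images = np.random.rand(8, 32, 32)
X_pretext, y_pretext = make_rotation_pretext(images)
print(X_pretext.shape, y_pretext)
```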
Deep reinforcement learning
Deep reinforcement learning (deep RL) is a subfield of machine learning that combines reinforcement learning (RL) and deep learning. RL considers the problem of a computational agent learning to make decisions by trial and error. Deep RL incorporates deep learning into the solution, allowing agents to make decisions from unstructured input data without manual engineering of the state space. Deep RL algorithms are able to take in very large inputs (e.g. every pixel rendered to the screen in a video game) and decide what actions to perform to optimize an objective (e.g. maximizing the game score).
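A bare-bones sketch of how the two ingredients combine, assuming PyTorch: a small Q-network maps a raw observation vector directly to one value per action, so no hand-crafted state features are needed, and it is updated with a single temporal-difference step (real agents such as DQN add replay buffers and target networks).

```python
import torch
from torch import nn

obs_dim, n_actions = 8, 4
# The deep-learning component: a neural network estimating action values.
q_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def td_update(obs, action, reward, next_obs, done, gamma=0.99):
    # One Q-learning step on a single (state, action, reward, next state) transition.
    q = q_net(obs)[action]
    with torch.no_grad():
        target = reward + gamma * q_net(next_obs).max() * (1.0 - done)
    loss = (q - target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy transition just to show the call.
print(td_update(torch.randn(obs_dim), action=2, reward=1.0,
                next_obs=torch.randn(obs_dim), done=0.0))
```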
Deep learning
Deep learning is part of a broader family of machine learning methods based on artificial neural networks with representation learning. The adjective "deep" in deep learning refers to the use of multiple layers in the network. Methods used can be supervised, semi-supervised or unsupervised.
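A minimal sketch of what "multiple layers" means in practice, assuming PyTorch: each stacked layer transforms the representation produced by the previous one, and the whole network is trained end to end (here with a supervised loss on synthetic data; semi- and unsupervised variants change the training signal, not the layered structure).

```python
import torch
from torch import nn

# The "deep" part: several stacked layers, each a learned re-representation of the input.
model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),   # layer 1: first learned representation
    nn.Linear(64, 64), nn.ReLU(),   # layer 2: higher-level representation
    nn.Linear(64, 2),               # output layer
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

X = torch.randn(256, 20)
y = (X[:, 0] > 0).long()
for _ in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
print(f"final training loss: {loss.item():.3f}")
```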
Metal–organic framework
Metal–organic frameworks (MOFs) are a class of compounds consisting of metal clusters (also known as SBUs) coordinated to organic ligands to form one-, two-, or three-dimensional structures. The organic ligands included are sometimes referred to as "struts" or "linkers", one example being 1,4-benzenedicarboxylic acid (BDC). More formally, a metal–organic framework is an organic–inorganic porous extended structure. An extended structure is a structure whose sub-units occur in a constant ratio and are arranged in a repeating pattern.