Abelian variety
In mathematics, particularly in algebraic geometry, complex analysis and algebraic number theory, an abelian variety is a projective algebraic variety that is also an algebraic group, i.e., has a group law that can be defined by regular functions. Abelian varieties are at the same time among the most studied objects in algebraic geometry and indispensable tools for much research on other topics in algebraic geometry and number theory. An abelian variety can be defined by equations having coefficients in any field; the variety is then said to be defined over that field.
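For a concrete low-dimensional picture (an illustration added here, not part of the original text): over a field k of characteristic different from 2 and 3, every elliptic curve is an abelian variety of dimension one.

% An elliptic curve E in the projective plane, cut out by a single homogeneous cubic:
E : \; Y^2 Z = X^3 + a X Z^2 + b Z^3, \qquad a, b \in k, \quad 4a^3 + 27b^2 \neq 0,
% with identity element O = [0 : 1 : 0].  The chord-and-tangent rule
%   P + Q + R = O  whenever P, Q, R lie on a common line
% defines the group law, and this law is given by regular functions on E \times E,
% exactly as the definition above requires.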
Deep learning
Deep learning is part of a broader family of machine learning methods based on artificial neural networks with representation learning. The adjective "deep" refers to the use of multiple layers in the network. Learning can be supervised, semi-supervised or unsupervised.
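A minimal sketch of what "multiple layers" means in practice; the layer sizes and the bare NumPy implementation below are chosen purely for illustration and are not tied to any particular deep learning library.

import numpy as np

rng = np.random.default_rng(0)

# Three stacked fully connected layers: each layer re-represents the output of the
# previous one, which is the representation-learning aspect mentioned above.
layer_sizes = [8, 16, 16, 4]                  # input dim, two hidden layers, output dim
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    # Pass an input vector through every layer; "deep" = more than one such layer.
    h = x
    for w in weights[:-1]:
        h = np.maximum(0.0, h @ w)            # ReLU nonlinearity between layers
    return h @ weights[-1]                    # linear output layer

print(forward(rng.normal(size=8)).shape)      # -> (4,)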
Algebraic variety
Algebraic varieties are the central objects of study in algebraic geometry, a sub-field of mathematics. Classically, an algebraic variety is defined as the set of solutions of a system of polynomial equations over the real or complex numbers. Modern definitions generalize this concept in several different ways, while attempting to preserve the geometric intuition behind the original definition. Conventions regarding the definition of an algebraic variety differ slightly.
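A classical example matching the definition above (the specific equations are an illustration supplied here):

% The unit circle is the solution set of a single polynomial equation over the real numbers,
V(x^2 + y^2 - 1) = \{ (x, y) \in \mathbb{R}^2 : x^2 + y^2 - 1 = 0 \},
% and more generally the solution set of a system of polynomials f_1, ..., f_r in n variables is
V(f_1, \dots, f_r) = \{ p \in k^n : f_1(p) = \cdots = f_r(p) = 0 \}.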
Projective variety
In algebraic geometry, a projective variety over an algebraically closed field k is a subset of some projective n-space over k that is the zero-locus of some finite family of homogeneous polynomials in n + 1 variables with coefficients in k that generate a prime ideal, the defining ideal of the variety. Equivalently, an algebraic variety is projective if it can be embedded as a Zariski-closed subvariety of projective space.
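For instance (an example supplied here, assuming the characteristic of k is not 2):

% The plane conic
C = \{ [X : Y : Z] \in \mathbb{P}^2 : X^2 + Y^2 - Z^2 = 0 \}
% is a projective variety: the quadratic form X^2 + Y^2 - Z^2 has rank 3, hence is
% irreducible, so it generates a prime ideal in k[X, Y, Z], the defining ideal of C.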
Types of artificial neural networks
There are many types of artificial neural networks (ANN). Artificial neural networks are computational models inspired by biological neural networks, and are used to approximate functions that are generally unknown. In particular, they are inspired by the behaviour of neurons and the electrical signals they convey between input (such as from the eyes or nerve endings in the hand), processing, and output from the brain (such as reacting to light, touch, or heat). The way neurons semantically communicate is an area of ongoing research.
Chow variety
In mathematics, particularly in the field of algebraic geometry, a Chow variety is an algebraic variety whose points correspond to effective algebraic cycles of fixed dimension and degree on a given projective space. More precisely, the Chow variety is the fine moduli variety parametrizing all effective algebraic cycles of that fixed dimension and degree in the given projective space. The Chow variety may be constructed via a Chow embedding into a sufficiently large projective space.
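In the simplest case (a standard illustration added here, with notation chosen only for this note): cycles of codimension one in projective n-space are divisors cut out by single forms, so the corresponding Chow variety is itself a projective space.

% Effective cycles of dimension n - 1 and degree d in P^n are the divisors of nonzero
% degree-d forms, taken up to scalar, so
\mathrm{Chow}_{\,n-1,\,d}(\mathbb{P}^n) \;\cong\; \mathbb{P}\big( k[x_0, \dots, x_n]_d \big)
  \;\cong\; \mathbb{P}^{\binom{n+d}{d} - 1}.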
Semantic similarity
Semantic similarity is a metric defined over a set of documents or terms, where the idea of distance between items is based on the likeness of their meaning or semantic content as opposed to lexicographical similarity. Semantic similarity measures are mathematical tools used to estimate the strength of the semantic relationship between units of language, concepts or instances, through a numerical description obtained by comparing information supporting their meaning or describing their nature.
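One widely used way to obtain such a numerical score is the cosine similarity between vector representations; the sketch below uses made-up toy vectors rather than real embeddings.

import numpy as np

def cosine_similarity(u, v):
    # Cosine of the angle between two vector representations of terms or documents;
    # values near 1.0 indicate similar meaning, values near 0.0 little relation.
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy vectors standing in for embeddings of three terms (numbers invented for illustration).
car        = [0.9, 0.1, 0.0]
automobile = [0.8, 0.2, 0.1]
banana     = [0.0, 0.1, 0.9]

print(cosine_similarity(car, automobile))   # high score: close in meaning
print(cosine_similarity(car, banana))       # low score: unrelated meaning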
Neural machine translation
Neural machine translation (NMT) is an approach to machine translation that uses an artificial neural network to predict the likelihood of a sequence of words, typically modeling entire sentences in a single integrated model. NMT models require only a fraction of the memory needed by traditional statistical machine translation (SMT) models. Furthermore, unlike conventional translation systems, all parts of the neural translation model are trained jointly (end-to-end) to maximize translation performance.
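The sentence-level likelihood mentioned above is usually written autoregressively, with a single network computing every factor; the notation below is a common convention rather than anything specific to a particular system.

% Probability of a target sentence y = (y_1, ..., y_T) given a source sentence x,
% with all factors produced by one network with parameters \theta:
P_\theta(y \mid x) \;=\; \prod_{t=1}^{T} P_\theta\big( y_t \mid y_1, \dots, y_{t-1}, x \big),
% and end-to-end training maximizes \sum_{(x, y)} \log P_\theta(y \mid x) over a parallel corpus.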
Natural language processing
Natural language processing (NLP) is an interdisciplinary subfield of linguistics and computer science. It is primarily concerned with processing natural language datasets, such as text corpora or speech corpora, using either rule-based or probabilistic (i.e. statistical and, most recently, neural network-based) machine learning approaches. The goal is a computer capable of "understanding" the contents of documents, including the contextual nuances of the language within them.
Deep reinforcement learning
Deep reinforcement learning (deep RL) is a subfield of machine learning that combines reinforcement learning (RL) and deep learning. RL considers the problem of a computational agent learning to make decisions by trial and error. Deep RL incorporates deep learning into the solution, allowing agents to make decisions from unstructured input data without manual engineering of the state space. Deep RL algorithms are able to take in very large inputs (e.g. every pixel rendered to the screen in a video game) and decide what actions to perform to optimize an objective (e.g. maximizing the game score).
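A minimal sketch of one deep RL recipe, Q-learning with a small neural network as the value estimator; the corridor environment, network sizes, and hyperparameters below are invented purely for illustration.

import numpy as np

rng = np.random.default_rng(0)

# --- Toy environment: a 1-D corridor; reaching the right end yields reward 1. ---
N_STATES, GOAL = 7, 6                         # states 0..6, goal at the right end

def reset():
    return 3                                  # start in the middle of the corridor

def step(state, action):                      # action 0 = move left, 1 = move right
    nxt = min(N_STATES - 1, max(0, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else 0.0
    done = nxt == GOAL or nxt == 0            # episode ends at either end
    return nxt, reward, done

def one_hot(s):
    v = np.zeros(N_STATES)
    v[s] = 1.0
    return v

# --- A tiny Q-network: one hidden layer mapping the raw state vector to Q(s, a). ---
H, A, LR, GAMMA, EPS = 16, 2, 0.05, 0.9, 0.1
W1 = rng.normal(scale=0.1, size=(H, N_STATES)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.1, size=(A, H));        b2 = np.zeros(A)

def q_values(s_vec):
    h = np.maximum(0.0, W1 @ s_vec + b1)      # hidden layer with ReLU
    return W2 @ h + b2, h

for episode in range(300):
    s, done = reset(), False
    while not done:
        s_vec = one_hot(s)
        q, h = q_values(s_vec)
        a = int(rng.integers(A)) if rng.random() < EPS else int(np.argmax(q))  # epsilon-greedy
        s2, r, done = step(s, a)

        # Temporal-difference target built from the network's own next-state estimate.
        q_next, _ = q_values(one_hot(s2))
        target = r if done else r + GAMMA * np.max(q_next)
        delta = q[a] - target                 # TD error for the action actually taken

        # Manual backpropagation of the squared TD error through both layers.
        dq = np.zeros(A); dq[a] = delta
        dW2 = np.outer(dq, h); db2 = dq
        dh = (W2.T @ dq) * (h > 0)            # ReLU gradient
        dW1 = np.outer(dh, s_vec); db1 = dh
        W2 -= LR * dW2; b2 -= LR * db2; W1 -= LR * dW1; b1 -= LR * db1

        s = s2

# Learned state values max_a Q(s, a) should increase toward the goal state.
print([round(float(np.max(q_values(one_hot(s))[0])), 2) for s in range(N_STATES)])

Each update nudges the network toward the temporal-difference target r + GAMMA * max_a Q(s', a), which is the trial-and-error learning signal described above.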