Generative pre-trained transformer
Generative pre-trained transformers (GPT) are a type of large language model (LLM) and a prominent framework for generative artificial intelligence. The first GPT was introduced in 2018 by OpenAI. GPT models are artificial neural networks that are based on the transformer architecture, pre-trained on large data sets of unlabelled text, and able to generate novel human-like content. As of 2023, most LLMs have these characteristics and are sometimes referred to broadly as GPTs.
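As a concrete illustration, here is a minimal sketch of generating text with a pre-trained GPT model. It assumes the Hugging Face transformers library and the publicly available gpt2 checkpoint, neither of which is named in the text above.

```python
# Minimal sketch: text generation with a pre-trained GPT model.
# Assumes the Hugging Face "transformers" library and the public
# "gpt2" checkpoint (both assumptions, not named in the text above).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("The transformer architecture", max_new_tokens=30)
print(result[0]["generated_text"])
```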
Majority rule
Majority rule is the principle that the group with the most supporters gets its way. A majority is more than half of the voters involved, and rule by such a majority is thought to benefit more people than rule by less than half (a mere minority) would. Majority rule is the binary decision rule most often used in decision-making bodies, including many legislatures of democratic nations. Where no single party wins a majority of the seats in a legislature, the majority of legislators that wields power is partly composed of members of other parties lending their support.
Speech recognition
Speech recognition is an interdisciplinary subfield of computer science and computational linguistics that develops methodologies and technologies enabling the recognition and translation of spoken language into text by computers. It is also known as automatic speech recognition (ASR), computer speech recognition, or speech-to-text (STT). It incorporates knowledge and research from the fields of computer science, linguistics, and computer engineering. The reverse process is speech synthesis.
Majority
A majority, also called a simple majority or absolute majority to distinguish it from related terms, is more than half of the total. It is a subset of a set consisting of more than half of the set's elements. For example, if a group consists of 20 individuals, a majority would be 11 or more individuals, while having 10 or fewer individuals would not constitute a majority. "Majority" can be used to specify the voting requirement, as in a "majority vote", which means more than half of the votes cast.
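The threshold is simple arithmetic: for a group of n members, a majority is any count strictly greater than n/2, i.e. floor(n/2) + 1. A minimal sketch (the function names are illustrative):

```python
# Minimal sketch of the majority threshold described above:
# a strict majority of n members is floor(n/2) + 1.
def majority_threshold(n: int) -> int:
    """Smallest number of members that constitutes a majority of n."""
    return n // 2 + 1

def is_majority(count: int, total: int) -> bool:
    """True if `count` is more than half of `total`."""
    return count > total / 2

assert majority_threshold(20) == 11   # matches the example in the text
assert is_majority(11, 20) and not is_majority(10, 20)
```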
Language model
A language model is a probabilistic model of a natural language that can generate probabilities for a series of words, based on text corpora in one or multiple languages it was trained on. Large language models, their most advanced form, are a combination of feedforward neural networks and transformers. They have superseded recurrent neural network-based models, which had previously superseded pure statistical models such as the word n-gram language model.
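As a concrete illustration of the statistical n-gram models mentioned above, here is a minimal bigram model estimated from a toy corpus; the corpus and function names are illustrative assumptions.

```python
# Minimal bigram language model: estimates P(word | previous word)
# from raw counts, illustrating the "pure statistical" n-gram models
# mentioned above. The toy corpus is an illustrative assumption.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

bigram_counts = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    bigram_counts[prev][word] += 1

def prob(word: str, prev: str) -> float:
    """Maximum-likelihood estimate of P(word | prev)."""
    total = sum(bigram_counts[prev].values())
    return bigram_counts[prev][word] / total if total else 0.0

print(prob("cat", "the"))  # 0.25: "the" is followed by cat/mat/dog/rug
```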
Reservoir computing
Reservoir computing is a framework for computation derived from recurrent neural network theory that maps input signals into higher-dimensional computational spaces through the dynamics of a fixed, non-linear system called a reservoir. After the input signal is fed into the reservoir, which is treated as a "black box", a simple readout mechanism is trained to read the state of the reservoir and map it to the desired output. A key benefit of this framework is that training is performed only at the readout stage, as the reservoir dynamics are fixed.
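A minimal sketch of this train-only-the-readout idea, in the style of an echo state network: the reservoir weights are drawn once and frozen, and only a linear readout is fit by least squares. The sizes, scaling, and toy task below are illustrative assumptions.

```python
# Minimal echo state network sketch: a fixed random reservoir plus a
# linear readout trained by least squares. Sizes, scaling, and the toy
# task (predicting a shifted sine wave) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 100, 1

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))      # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

def run_reservoir(inputs):
    """Drive the fixed nonlinear reservoir and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

t = np.linspace(0, 20, 500)
u, y = np.sin(t), np.sin(t + 0.5)     # toy task: predict a shifted sine

X = run_reservoir(u)
W_out, *_ = np.linalg.lstsq(X, y, rcond=None)  # train only the readout
print("training MSE:", np.mean((X @ W_out - y) ** 2))
```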
Deep reinforcement learning
Deep reinforcement learning (deep RL) is a subfield of machine learning that combines reinforcement learning (RL) and deep learning. RL considers the problem of a computational agent learning to make decisions by trial and error. Deep RL incorporates deep learning into the solution, allowing agents to make decisions from unstructured input data without manual engineering of the state space. Deep RL algorithms are able to take in very large inputs (e.g. every pixel rendered to the screen in a video game) and decide what actions to perform to optimize an objective (e.g. maximizing the game score).
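Below is a minimal sketch of the "deep" half of this combination: a neural network that maps a raw observation vector directly to per-action value estimates, as in a deep Q-network. The layer sizes and the flattened-pixel input are illustrative assumptions.

```python
# Minimal sketch of the "deep" half of deep RL: a network that maps a
# raw observation vector directly to per-action values, as in a deep
# Q-network. Layer sizes and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, n_actions),   # one value estimate per action
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

q = QNetwork(obs_dim=84 * 84, n_actions=4)        # e.g. flattened pixels
action = q(torch.rand(1, 84 * 84)).argmax(dim=1)  # greedy action choice
print(action.item())
```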
Q-learning
Q-learning is a model-free reinforcement learning algorithm to learn the value of an action in a particular state. It does not require a model of the environment (hence "model-free"), and it can handle problems with stochastic transitions and rewards without requiring adaptations. For any finite Markov decision process (FMDP), Q-learning finds an optimal policy in the sense of maximizing the expected value of the total reward over any and all successive steps, starting from the current state.
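The update rule at the heart of the algorithm is Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a)). Below is a minimal tabular sketch; the Gym-style environment interface and the alpha, gamma, and epsilon values are illustrative assumptions.

```python
# Minimal tabular Q-learning sketch. The update rule
#   Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
# is the standard one; the Gym-style env interface and the alpha, gamma,
# and epsilon values are illustrative assumptions.
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    Q = defaultdict(float)  # maps (state, action) -> value estimate
    actions = range(env.action_space.n)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # epsilon-greedy exploration
            if random.random() < epsilon:
                action = random.choice(list(actions))
            else:
                action = max(actions, key=lambda a: Q[(state, a)])
            next_state, reward, done, _ = env.step(action)
            best_next = max(Q[(next_state, a)] for a in actions)
            Q[(state, action)] += alpha * (
                reward + gamma * best_next - Q[(state, action)]
            )
            state = next_state
    return Q
```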
Attention (machine learning)
Machine learning-based attention is a mechanism mimicking cognitive attention. It calculates "soft" weights for each word, more precisely for its embedding, in the context window. It can do so either in parallel (such as in transformers) or sequentially (such as in recurrent neural networks). "Soft" weights can change during each runtime, in contrast to "hard" weights, which are (pre-)trained and fine-tuned and remain frozen afterwards. Multiple attention heads are used in transformer-based large language models.
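A minimal sketch of how such soft weights are computed in transformers, using scaled dot-product attention; the random query/key/value matrices and their sizes are illustrative assumptions.

```python
# Minimal sketch of "soft" attention weights: scaled dot-product
# attention over a sequence of embeddings, as used in transformers.
# The random Q/K/V matrices and sizes are illustrative assumptions.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Each output row is a weighted average of the value rows."""
    d_k = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))  # soft weights per token
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d_k = 5, 8                    # 5 tokens, 8-dim embeddings
Q, K, V = (rng.standard_normal((seq_len, d_k)) for _ in range(3))
out, w = attention(Q, K, V)
print(w.sum(axis=1))                   # each row of weights sums to 1
```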
Voting
Voting is a method by which a group, such as a meeting or an electorate, convenes to make a collective decision or express an opinion, usually following discussions, debates, or election campaigns. Democracies elect holders of high office by voting. Residents of a jurisdiction represented by an elected official are called "constituents", and the constituents who choose to cast a ballot for their chosen candidate are called "voters".