Machine learning: Machine learning (ML) is an umbrella term for approaches that solve problems for which developing algorithms by hand would be cost-prohibitive; instead, machines 'discover' their 'own' algorithms from data, without being explicitly told what to do by a human-written program. Recently, generative artificial neural networks have surpassed the results of many previous approaches.
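As a minimal illustration (not from the source), the sketch below contrasts hand-coding a rule with letting a fitting procedure recover it from examples; the toy data and the least-squares choice are assumptions made purely for demonstration.

```python
import numpy as np

# Hypothetical training data: inputs x and the outputs y we want the machine
# to reproduce (here generated from y = 2*x + 1 plus a little noise).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=50)
y = 2 * x + 1 + rng.normal(scale=0.05, size=50)

# "Discovering" the rule: ordinary least squares finds the slope and intercept
# that best explain the examples, with no human writing the mapping explicitly.
A = np.column_stack([x, np.ones_like(x)])
slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]
print(f"learned rule: y ~ {slope:.2f} * x + {intercept:.2f}")
```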
Hopfield network: A Hopfield network (also called an Amari-Hopfield network, Ising model of a neural network, or Ising–Lenz–Little model) is a form of recurrent artificial neural network and a type of spin glass system, popularised by John Hopfield in 1982, having been described earlier by Shun'ichi Amari in 1972 and by Little in 1974, and building on Ernst Ising's work with Wilhelm Lenz on the Ising model. Hopfield networks serve as content-addressable ("associative") memory systems with binary threshold nodes, or with continuous variables.
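A minimal sketch of the content-addressable memory idea, assuming bipolar (+1/-1) threshold units and a Hebbian outer-product storage rule; the pattern and network size are illustrative, not from the source.

```python
import numpy as np

def train(patterns):
    # Store patterns with the Hebbian outer-product rule.
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)          # no self-connections
    return W / patterns.shape[0]

def recall(W, state, steps=10):
    # Recall by repeatedly applying the sign threshold until the state settles.
    s = state.copy()
    for _ in range(steps):
        s_new = np.where(W @ s >= 0, 1, -1)   # synchronous threshold update
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

# Store one pattern and recover it from a corrupted cue.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = train(pattern[None, :])
cue = pattern.copy()
cue[:2] *= -1                        # flip two bits in the cue
print(recall(W, cue))                # converges back to the stored pattern
```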
Visual memory: Visual memory describes the relationship between perceptual processing and the encoding, storage and retrieval of the resulting neural representations. Visual memory operates over a broad time range, from the duration of an eye movement to the years needed to visually navigate to a previously visited location. Visual memory is a form of memory which preserves some characteristics of our senses pertaining to visual experience. We are able to place in memory visual information which resembles objects, places, animals or people in a mental image.
Music sequencer: A music sequencer (or audio sequencer or simply sequencer) is a device or application software that can record, edit, or play back music by handling note and performance information in several forms, typically CV/Gate, MIDI, or Open Sound Control (OSC), and possibly audio and automation data for digital audio workstations (DAWs) and plug-ins. The advent of the Musical Instrument Digital Interface (MIDI) and the Atari ST home computer in the 1980s gave programmers the opportunity to design software that could more easily record and play back sequences of notes played or programmed by a musician.
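The sketch below is a hypothetical, stripped-down sequencer of the kind described: it stores note and performance events with beat positions and steps through them in time, printing each event where a real sequencer would emit MIDI, CV/Gate, or OSC messages. All names and values are illustrative assumptions.

```python
import time
from dataclasses import dataclass

@dataclass
class NoteEvent:
    beat: float      # position in the sequence, in beats
    pitch: int       # MIDI-style note number (60 = middle C)
    velocity: int    # how hard the note is struck
    length: float    # duration in beats

sequence = [
    NoteEvent(0.0, 60, 100, 1.0),
    NoteEvent(1.0, 64, 90, 1.0),
    NoteEvent(2.0, 67, 90, 2.0),
]

def play(events, bpm=120):
    seconds_per_beat = 60.0 / bpm
    start = time.monotonic()
    for ev in sorted(events, key=lambda e: e.beat):
        # Wait until the event's beat position, then "emit" it.
        time.sleep(max(0.0, start + ev.beat * seconds_per_beat - time.monotonic()))
        print(f"beat {ev.beat}: note_on pitch={ev.pitch} vel={ev.velocity} len={ev.length}")

play(sequence)
```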
Episodic memory: Episodic memory is the memory of everyday events (such as times, locations, associated emotions, and other contextual information) that can be explicitly stated or conjured. It is the collection of past personal experiences that occurred at particular times and places; for example, the party on one's 7th birthday. Along with semantic memory, it comprises the category of explicit memory, one of the two major divisions of long-term memory (the other being implicit memory).
Memory consolidation: Memory consolidation is a category of processes that stabilize a memory trace after its initial acquisition. A memory trace is a change in the nervous system caused by memorizing something. Consolidation is distinguished into two specific processes. The first, synaptic consolidation, which is thought to correspond to late-phase long-term potentiation, occurs on a small scale in the synaptic connections and neural circuits within the first few hours after learning.
Feedforward neural network: A feedforward neural network (FNN) is one of the two broad types of artificial neural network, characterized by the direction of information flow between its layers. Its flow is uni-directional, meaning that information in the model flows in only one direction—forward—from the input nodes, through the hidden nodes (if any), and to the output nodes, without any cycles or loops, in contrast to recurrent neural networks, which have a bi-directional flow.
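A minimal sketch of the uni-directional flow described above, assuming one hidden layer with a ReLU activation; the layer sizes and random weights are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # input (3) -> hidden (4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)   # hidden (4) -> output (2)

def forward(x):
    # Information moves forward only: input -> hidden -> output, no cycles.
    h = np.maximum(0.0, W1 @ x + b1)    # hidden layer with ReLU activation
    return W2 @ h + b2                  # output layer

print(forward(np.array([0.5, -1.0, 2.0])))
```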
Autobiographical memory: Autobiographical memory (AM) is a memory system consisting of episodes recollected from an individual's life, based on a combination of episodic memory (personal experiences and specific objects, people and events experienced at a particular time and place) and semantic memory (general knowledge and facts about the world). It is thus a type of explicit memory. Conway and Pleydell-Pearce (2000) proposed that autobiographical memory is constructed within a self-memory system (SMS), a conceptual model composed of an autobiographical knowledge base and the working self.
Rhythm: Rhythm (from Greek ῥυθμός, rhythmos, "any regular recurring motion, symmetry") generally means a "movement marked by the regulated succession of strong and weak elements, or of opposite or different conditions". This general meaning of regular recurrence or pattern in time can apply to a wide variety of cyclical natural phenomena having a periodicity or frequency of anything from microseconds to several seconds (as with the riff in a rock music song); to several minutes or hours, or, at the most extreme, even over many years.
Transformer (machine learning model): A transformer is a deep learning architecture that relies on the parallel multi-head attention mechanism. The modern transformer was proposed in the 2017 paper "Attention Is All You Need" by Ashish Vaswani et al. of the Google Brain team. It is notable for requiring less training time than previous recurrent neural architectures, such as long short-term memory (LSTM), and its later variants have been widely adopted for training large language models on large (language) datasets, such as the Wikipedia corpus and Common Crawl, by virtue of parallelized processing of the input sequence.
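A minimal sketch of single-head scaled dot-product attention, the core operation that the multi-head mechanism repeats in parallel across heads; shapes and values are illustrative assumptions. All sequence positions are handled in one matrix product, which is what lets transformers avoid the step-by-step recurrence of LSTMs.

```python
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over keys
    return weights @ V                                 # weighted mix of value vectors

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8
X = rng.normal(size=(seq_len, d_model))                # a toy input sequence
print(attention(X, X, X).shape)                        # (5, 8): one output per position
```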