Neuron: Within a nervous system, a neuron, neurone, or nerve cell is an electrically excitable cell that fires electric signals called action potentials across a neural network. Neurons communicate with other cells via synapses, specialized connections that commonly use minute amounts of chemical neurotransmitters to pass the electric signal from the presynaptic neuron to the target cell across the synaptic gap. The neuron is the main component of nervous tissue in all animals except sponges and placozoans.
Hopfield network: A Hopfield network (or Amari-Hopfield network, Ising model of a neural network, or Ising–Lenz–Little model) is a form of recurrent artificial neural network and a type of spin glass system. It was popularised by John Hopfield in 1982, building on earlier descriptions by Shun'ichi Amari in 1972 and by Little in 1974, which in turn drew on Ernst Ising's work with Wilhelm Lenz on the Ising model. Hopfield networks serve as content-addressable ("associative") memory systems with binary threshold nodes, or with continuous variables.
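A minimal sketch of the associative-recall idea in Python, assuming bipolar (+1/-1) threshold units, one-shot Hebbian storage, and asynchronous updates; the function names and toy patterns are illustrative:

```python
import numpy as np

def store(patterns):
    """Hebbian one-shot storage: sum the outer products of the patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)               # no self-connections
    return W / n

def recall(W, state, sweeps=10):
    """Asynchronous binary-threshold updates until the state settles."""
    state = state.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = store(patterns)
noisy = np.array([1, -1, 1, 1, 1, -1])   # first pattern with one bit flipped
print(recall(W, noisy))                  # recovers [ 1 -1  1 -1  1 -1]
```

Starting from a corrupted cue, the dynamics descend the network's energy function and settle at the nearest stored pattern, which is what makes the memory content-addressable.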
Deep learning: Deep learning is part of a broader family of machine learning methods based on artificial neural networks with representation learning. The adjective "deep" in deep learning refers to the use of multiple layers in the network. Methods used can be supervised, semi-supervised, or unsupervised.
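To make "multiple layers" concrete, here is a minimal sketch of a forward pass through a small stack of layers; the layer sizes and the ReLU nonlinearity are illustrative assumptions, not a prescribed architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [8, 16, 16, 4]               # input -> two hidden layers -> output
weights = [rng.normal(0, 0.1, (m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Each layer applies a learned linear map followed by a nonlinearity."""
    for W in weights[:-1]:
        x = np.maximum(0, x @ W)           # ReLU hidden layers
    return x @ weights[-1]                 # linear output layer

print(forward(rng.normal(size=8)).shape)   # (4,)
```

The "depth" is simply the number of such stacked transformations; training by any of the methods mentioned above adjusts the weight matrices.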
Homeostatic plasticity: In neuroscience, homeostatic plasticity refers to the capacity of neurons to regulate their own excitability relative to network activity. The term joins two opposing concepts: 'homeostatic' (from the Greek words for 'same' and 'state' or 'condition') and 'plasticity' ('change'), so homeostatic plasticity means "staying the same through change". Homeostatic synaptic plasticity is a means of maintaining the synaptic basis for learning, respiration, and locomotion, in contrast to the Hebbian plasticity associated with learning and memory.
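A toy sketch of one homeostatic mechanism, synaptic scaling, under loudly simplified assumptions: a linear rate model, Poisson input activity, and invented constants. The neuron multiplicatively scales its input weights so its average activity drifts toward a set point:

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.uniform(0.5, 1.5, size=20)       # synaptic input weights
target_rate, eta = 5.0, 0.01             # set-point rate and scaling speed (made up)

for step in range(1000):
    inputs = rng.poisson(2.0, size=20)   # presynaptic activity (toy model)
    rate = w @ inputs / len(w)           # toy postsynaptic firing rate
    # Multiplicative scaling: too quiet -> scale all weights up; too active -> down.
    w *= 1 + eta * (target_rate - rate) / target_rate

print(round(w.mean(), 2))                # weights settle where the mean rate ~ target
```

Because the scaling is multiplicative, relative synaptic strengths shaped by Hebbian plasticity are preserved while overall excitability is held near the set point, matching the "staying the same through change" idea.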
Spiking neural network: Spiking neural networks (SNNs) are artificial neural networks that more closely mimic natural neural networks. In addition to neuronal and synaptic state, SNNs incorporate the concept of time into their operating model. The idea is that neurons in an SNN do not transmit information at each propagation cycle (as happens with typical multi-layer perceptron networks), but rather transmit information only when a membrane potential, an intrinsic quality of the neuron related to its membrane electrical charge, reaches a specific value called the threshold.
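The threshold mechanism is easy to see in a minimal leaky integrate-and-fire (LIF) simulation; the constants below are illustrative textbook-style values, not fitted to any real neuron:

```python
dt, tau = 1.0, 20.0                                # time step and membrane time constant (ms)
v_rest, v_reset, v_thresh = -65.0, -70.0, -50.0    # potentials (mV), illustrative
v, spikes = v_rest, []

for t in range(200):
    current = 1.8 if 50 <= t < 150 else 0.0        # input current only mid-run
    v += dt / tau * (-(v - v_rest) + current * 20) # leaky integration of input
    if v >= v_thresh:                              # membrane potential at threshold:
        spikes.append(t)                           # ...emit a spike
        v = v_reset                                # ...and reset the membrane

print(spikes)                                      # spikes occur only while input drives v up
```

Information is carried by the spike times themselves, not by a value emitted on every propagation cycle, which is the key difference from a conventional multi-layer perceptron.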
Deep reinforcement learning: Deep reinforcement learning (deep RL) is a subfield of machine learning that combines reinforcement learning (RL) and deep learning. RL considers the problem of a computational agent learning to make decisions by trial and error. Deep RL incorporates deep learning into the solution, allowing agents to make decisions from unstructured input data without manual engineering of the state space. Deep RL algorithms are able to take in very large inputs (e.g. every pixel rendered to the screen in a video game) and decide what actions to perform to optimize an objective (e.g. maximizing the game score).
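A compact sketch of the trial-and-error loop. To stay short, a linear value function stands in for a deep network, and the two-action environment is invented for illustration; the update shown is standard semi-gradient Q-learning:

```python
import numpy as np

rng = np.random.default_rng(0)
obs_dim, n_actions = 4, 2
W = np.zeros((obs_dim, n_actions))          # Q(s, a) = s @ W  (linear stand-in)
alpha, gamma, eps = 0.1, 0.9, 0.1           # step size, discount, exploration rate

def env_step(obs, action):
    """Toy environment: action 1 pays off exactly when the first feature is positive."""
    reward = 1.0 if (action == 1) == (obs[0] > 0) else 0.0
    return rng.normal(size=obs_dim), reward

obs = rng.normal(size=obs_dim)
for _ in range(5000):
    q = obs @ W                             # action values from the raw observation
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(q.argmax())
    next_obs, r = env_step(obs, a)          # act, observe reward and next state
    td_error = r + gamma * (next_obs @ W).max() - q[a]
    W[:, a] += alpha * td_error * obs       # semi-gradient Q-learning update
    obs = next_obs

print(np.round(W[0], 2))  # action 1's weight on feature 0 turns positive
```

In deep RL proper, the matrix W is replaced by a deep network so the same loop can work directly on unstructured inputs such as raw pixels.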
Feature learning: In machine learning, feature learning or representation learning is a set of techniques that allows a system to automatically discover the representations needed for feature detection or classification from raw data. This replaces manual feature engineering and allows a machine to both learn the features and use them to perform a specific task. Feature learning is motivated by the fact that machine learning tasks such as classification often require input that is mathematically and computationally convenient to process.
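A small sketch of unsupervised feature learning: a linear autoencoder discovers a 2-D representation of 8-D data by learning to reconstruct its own input. The data, dimensions, and learning rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
basis = rng.normal(size=(2, 8))
X = rng.normal(size=(256, 2)) @ basis      # raw data that truly lies on a 2-D subspace

W_enc = rng.normal(0, 0.1, (8, 2))         # encoder: raw input -> learned features
W_dec = rng.normal(0, 0.1, (2, 8))         # decoder: features -> reconstruction
lr = 0.02

for _ in range(2000):
    H = X @ W_enc                          # the learned representation
    X_hat = H @ W_dec                      # reconstruction of the raw input
    G = 2 * (X_hat - X) / len(X)           # gradient of the mean squared error
    grad_dec, grad_enc = H.T @ G, X.T @ (G @ W_dec.T)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

# Error shrinks as the 2-D features capture the structure of the raw data.
print(round(float(np.mean((X @ W_enc @ W_dec - X) ** 2)), 4))
```

No labels or hand-designed features are involved: the representation emerges solely from the objective of reconstructing the input, which is the sense in which the features are "learned".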
Central pattern generator: Central pattern generators (CPGs) are self-organizing biological neural circuits that produce rhythmic outputs in the absence of rhythmic input. They are the source of the tightly coupled patterns of neural activity that drive rhythmic and stereotyped motor behaviors like walking, swimming, breathing, or chewing. The ability to function without input from higher brain areas still requires modulatory inputs, and their outputs are not fixed. Flexibility in response to sensory input is a fundamental quality of CPG-driven behavior.
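A classic toy model of a CPG is the half-center oscillator: two mutually inhibiting units with slow self-adaptation that turn constant, non-rhythmic drive into alternating bursts. A Matsuoka-style sketch with illustrative parameter values:

```python
import numpy as np

dt, tau, tau_a = 0.05, 1.0, 12.0    # step, membrane and (slower) adaptation times
beta, w_inh, drive = 2.5, 2.5, 1.0  # adaptation gain, mutual inhibition, tonic input

x = np.array([0.1, 0.0])            # membrane states (slightly asymmetric start)
a = np.zeros(2)                     # adaptation (fatigue) states
rect = lambda v: np.maximum(v, 0)   # rectified output as a stand-in firing rate

trace = []
for _ in range(4000):
    y = rect(x)
    x += dt / tau * (-x - beta * a - w_inh * y[::-1] + drive)  # each unit inhibits the other
    a += dt / tau_a * (-a + y)                                 # slow self-adaptation
    trace.append(y.copy())

trace = np.array(trace)
print(round(float((trace[:, 0] > trace[:, 1]).mean()), 2))  # ~0.5: the units alternate
```

The rhythm here is generated by the circuit itself (the tonic drive carries no timing information), while changing the drive changes the oscillation, mirroring how modulatory inputs reshape CPG output without providing the rhythm.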
Reticular formation: The reticular formation is a set of interconnected nuclei located throughout the brainstem. It is not anatomically well defined, because it includes neurons located in different parts of the brain. The neurons of the reticular formation make up a complex set of networks in the core of the brainstem that extend from the upper part of the midbrain to the lower part of the medulla oblongata. The reticular formation includes ascending pathways to the cortex in the ascending reticular activating system (ARAS) and descending pathways to the spinal cord via the reticulospinal tracts.
Basal ganglia: The basal ganglia (BG), or basal nuclei, are a group of subcortical nuclei, of varied origin, in the brains of vertebrates. In humans and some other primates there are some differences, mainly in the division of the globus pallidus into an external and an internal region, and in the division of the striatum. The basal ganglia are situated at the base of the forebrain and the top of the midbrain. They are strongly interconnected with the cerebral cortex, thalamus, and brainstem, as well as several other brain areas.