Action potential
An action potential occurs when the membrane potential of a specific cell rapidly rises and falls. This depolarization then causes adjacent locations to similarly depolarize. Action potentials occur in several types of animal cells, called excitable cells, which include neurons and muscle cells, as well as in some plant cells. Certain endocrine cells, such as pancreatic beta cells, and certain cells of the anterior pituitary gland are also excitable cells.
Homeostatic plasticity
In neuroscience, homeostatic plasticity refers to the capacity of neurons to regulate their own excitability relative to network activity. The term derives from two opposing concepts: 'homeostatic' (from the Greek words for 'same' and 'state' or 'condition') and 'plasticity' (or 'change'); thus homeostatic plasticity means "staying the same through change". Homeostatic synaptic plasticity is a means of maintaining the synaptic basis for learning, respiration, and locomotion, in contrast to the Hebbian plasticity associated with learning and memory.
Hebbian theory
Hebbian theory is a neuropsychological theory claiming that an increase in synaptic efficacy arises from a presynaptic cell's repeated and persistent stimulation of a postsynaptic cell. It is an attempt to explain synaptic plasticity, the adaptation of brain neurons during the learning process. It was introduced by Donald Hebb in his 1949 book The Organization of Behavior. The theory is also called Hebb's rule, Hebb's postulate, and cell assembly theory.
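Hebb's postulate is often formalized (a common convention, not stated above) as a weight update proportional to the product of pre- and postsynaptic activity, Δw = η·x·y. A minimal sketch under that assumption:

```python
import numpy as np

def hebbian_update(w, x, y, eta=0.01):
    # Hebb's rule: weights grow where pre- and postsynaptic
    # activity coincide (delta_w = eta * outer(y, x)).
    return w + eta * np.outer(y, x)

# Toy usage: one linear "neuron" repeatedly driven by the same input.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(1, 3))  # postsynaptic x presynaptic weights
x = np.array([1.0, 0.5, 0.0])           # presynaptic activity pattern
for _ in range(100):
    y = w @ x                           # postsynaptic response
    w = hebbian_update(w, x, y)
print(w)  # weight components along the co-active inputs have grown in magnitude
```

Note that the unmodified rule is unstable: weights onto correlated inputs grow without bound, which is one motivation for the homeostatic mechanisms described in the previous entry.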
Mutual information
In probability theory and information theory, the mutual information (MI) of two random variables is a measure of the mutual dependence between the two variables. More specifically, it quantifies the "amount of information" (in units such as shannons (bits), nats or hartleys) obtained about one random variable by observing the other random variable. The concept of mutual information is intimately linked to that of entropy of a random variable, a fundamental notion in information theory that quantifies the expected "amount of information" held in a random variable.
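For discrete variables, MI follows directly from the joint distribution: I(X;Y) = Σ p(x,y) log₂[p(x,y) / (p(x)p(y))], measured in bits when the logarithm is base 2. A small sketch (the function name and table layout are illustrative choices):

```python
import numpy as np

def mutual_information(p_xy):
    # Mutual information I(X;Y) in bits from a joint probability table
    # with X on rows and Y on columns.
    p_xy = np.asarray(p_xy, dtype=float)
    p_x = p_xy.sum(axis=1, keepdims=True)  # marginal of X
    p_y = p_xy.sum(axis=0, keepdims=True)  # marginal of Y
    nz = p_xy > 0                          # 0 * log(0) is taken as 0
    return float((p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])).sum())

# Perfectly correlated bits carry 1 bit of mutual information:
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))        # -> 1.0
# Independent bits carry none:
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))    # -> 0.0
```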
Dendrite
A dendrite (from Greek δένδρον déndron, "tree") or dendron is a branched protoplasmic extension of a nerve cell that propagates the electrochemical stimulation received from other neural cells to the cell body, or soma, of the neuron from which the dendrites project. Electrical stimulation is transmitted onto dendrites by upstream neurons (usually via their axons) via synapses which are located at various points throughout the dendritic tree.
Error function
In mathematics, the error function (also called the Gauss error function), often denoted by erf, is a complex function of a complex variable defined as:

\operatorname{erf}(z) = \frac{2}{\sqrt{\pi}} \int_0^z e^{-t^2} \, dt

Some authors define the error function without the factor of 2/√π. This nonelementary integral is a sigmoid function that occurs often in probability, statistics, and partial differential equations. In many of these applications, the function argument is a real number. If the function argument is real, then the function value is also real.
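As a quick sanity check on the definition, one can compare Python's built-in math.erf against a direct midpoint-rule integration of (2/√π)·e^(−t²); the helper name and step count below are arbitrary choices:

```python
import math

def erf_numeric(x, n=10_000):
    # Midpoint-rule integration of (2/sqrt(pi)) * exp(-t^2) from 0 to x.
    h = x / n
    total = sum(math.exp(-((i + 0.5) * h) ** 2) for i in range(n))
    return (2.0 / math.sqrt(math.pi)) * h * total

for x in (0.5, 1.0, 2.0):
    print(x, math.erf(x), erf_numeric(x))  # the two values agree closely
```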
S phase
S phase (synthesis phase) is the phase of the cell cycle in which DNA is replicated, occurring between G1 phase and G2 phase. Since accurate duplication of the genome is critical to successful cell division, the processes that occur during S phase are tightly regulated and widely conserved. Entry into S phase (the G1/S transition) is controlled by the G1 restriction point (R), which commits cells to the remainder of the cell cycle if nutrients and growth signaling are adequate.
Reinforcement learning
Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning. Reinforcement learning differs from supervised learning in not needing labelled input/output pairs to be presented, and in not needing sub-optimal actions to be explicitly corrected.
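One of the simplest concrete instances of this reward-maximization setup is tabular Q-learning; the toy chain environment and hyperparameters below are illustrative assumptions, not anything prescribed by the text:

```python
import random

# Tabular Q-learning on a 5-state chain: actions move left/right,
# and reaching the rightmost state ends the episode with reward +1.
N_STATES, ACTIONS = 5, (-1, +1)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for _ in range(500):                       # episodes
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection.
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda a: q[(s, a)]))
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update: bootstrap from the best next-state value.
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS)
                              - q[(s, a)])
        s = s2

print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])
# -> [1, 1, 1, 1]: the greedy policy moves right in every non-terminal state.
```

The agent is never shown labelled state-action pairs; it improves its value estimates purely from the rewards it encounters, which is the contrast with supervised learning noted above.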
Medium spiny neuron
Medium spiny neurons (MSNs), also known as spiny projection neurons (SPNs), are a special type of GABAergic inhibitory cell representing 95% of neurons within the human striatum, a basal ganglia structure. Medium spiny neurons have two primary phenotypes (characteristic types): D1-type MSNs of the direct pathway and D2-type MSNs of the indirect pathway. Most striatal MSNs contain only D1-type or D2-type dopamine receptors, but a subpopulation of MSNs exhibits both phenotypes.
Information theory
Information theory is the mathematical study of the quantification, storage, and communication of information. The field was originally established by the works of Harry Nyquist and Ralph Hartley in the 1920s, and Claude Shannon in the 1940s. The field, in applied mathematics, is at the intersection of probability theory, statistics, computer science, statistical mechanics, information engineering, and electrical engineering. A key measure in information theory is entropy.
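Entropy quantifies the expected information content of a random variable: H(X) = −Σ p(x) log₂ p(x), in bits when the logarithm is base 2. A short sketch (the function name is an illustrative choice):

```python
import math

def entropy_bits(p):
    # Shannon entropy H(X) = -sum p(x) * log2 p(x), in bits;
    # zero-probability outcomes contribute nothing.
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

print(entropy_bits([0.5, 0.5]))   # fair coin -> 1.0 bit
print(entropy_bits([0.9, 0.1]))   # biased coin -> ~0.469 bits
print(entropy_bits([0.25] * 4))   # uniform over 4 outcomes -> 2.0 bits
```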