A neuronal ensemble is a population of nervous system cells (or cultured neurons) involved in a particular neural computation.
The concept of the neuronal ensemble dates back to the work of Charles Sherrington, who described the functioning of the CNS as a system of reflex arcs, each composed of interconnected excitatory and inhibitory neurons. In Sherrington's scheme, α-motoneurons are the final common path of many neural circuits of varying complexity: motoneurons integrate a large number of inputs and send their final output to the muscles.
Donald Hebb theoretically developed the concept of neuronal ensemble in his famous book "The Organization of Behavior" (1949). He defined "cell assembly" as "a diffuse structure comprising cells in the cortex and diencephalon, capable of acting briefly as a closed system, delivering facilitation to other such systems". Hebb suggested that, depending on functional requirements, individual brain cells could participate in different cell assemblies and be involved in multiple computations.
In the 1980s, Apostolos Georgopoulos and his colleagues Ron Kettner, Andrew Schwartz, and Kenneth Johnson formulated the population vector hypothesis to explain how populations of motor cortex neurons encode movement direction. The hypothesis was based on the observation that individual neurons tend to discharge more for movements in particular directions, their so-called preferred directions. In the population vector model, individual neurons 'vote' for their preferred directions with their firing rates; the final estimate is computed by vectorial summation of the preferred directions weighted by the corresponding rates. This model proved successful in describing motor-cortex encoding of reach direction, and it was also able to predict new effects. For example, Georgopoulos's population vector accurately described mental rotations made by monkeys trained to translate the locations of visual stimuli into spatially shifted locations of reach targets.
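The voting scheme above can be sketched in a few lines of code. This is a minimal illustration, not Georgopoulos's original analysis: it assumes idealised cosine tuning curves, noiseless rates, and evenly spaced preferred directions, all of which are choices made here for clarity.

```python
import numpy as np

# Illustrative population vector decoder. Assumptions (not from the text):
# cosine tuning around each neuron's preferred direction, evenly spaced
# preferred directions, and noiseless firing rates.

n_neurons = 100
preferred = np.linspace(0.0, 2.0 * np.pi, n_neurons, endpoint=False)

def firing_rates(movement_dir, baseline=10.0, modulation=8.0):
    """Cosine tuning: rate peaks when movement matches the preferred direction."""
    return baseline + modulation * np.cos(movement_dir - preferred)

def population_vector(rates):
    """Sum unit vectors along preferred directions, weighted by rate above the mean."""
    weights = rates - rates.mean()
    x = np.sum(weights * np.cos(preferred))
    y = np.sum(weights * np.sin(preferred))
    return np.arctan2(y, x) % (2.0 * np.pi)

true_dir = np.deg2rad(135.0)
decoded = population_vector(firing_rates(true_dir))
```

With ideal cosine tuning, the vector sum recovers the movement direction exactly; with noisy rates or unevenly sampled preferred directions, the estimate degrades gracefully, which is part of why the model proved robust in practice.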
Neural coding (or neural representation) is a field of neuroscience concerned with characterising the relationship between a stimulus and the responses of individual neurons or neuronal ensembles, as well as the relationships among the electrical activities of the neurons within an ensemble. Based on the theory that sensory and other information is represented in the brain by networks of neurons, neurons are thought to be able to encode both digital and analog information.
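One common way analog information is thought to be carried is a rate code, where stimulus intensity maps onto firing rate. The sketch below is a toy illustration under assumptions made here, not a model from the text: a linear intensity-to-rate mapping and Poisson-like spike generation, with the stimulus recovered by counting spikes.

```python
import numpy as np

# Toy rate code (illustrative assumptions): an analog stimulus in [0, 1]
# sets the mean firing rate of a neuron; spikes are drawn as a Bernoulli
# approximation of a Poisson process; the decoder counts spikes.

rng = np.random.default_rng(42)

def rate_code(intensity, max_rate=100.0):
    """Map a stimulus intensity in [0, 1] to a firing rate in Hz."""
    return max_rate * np.clip(intensity, 0.0, 1.0)

def poisson_spikes(rate_hz, duration_s=1.0, dt=0.001):
    """Binary spike train: each bin fires with probability rate * dt."""
    n_bins = int(duration_s / dt)
    return rng.random(n_bins) < rate_hz * dt

stim = 0.6
train = poisson_spikes(rate_code(stim))
estimate = train.sum() / 100.0  # spike count / (max_rate * duration)
```

Counting spikes over a longer window tightens the estimate, which is the usual trade-off of a pure rate code: accuracy costs integration time, one motivation for studying temporal codes as well.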
Microelectrode arrays (MEAs) (also referred to as multielectrode arrays) are devices that contain multiple (tens to thousands) microelectrodes through which neural signals are obtained or delivered, essentially serving as neural interfaces that connect neurons to electronic circuitry. There are two general classes of MEAs: implantable MEAs, used in vivo, and non-implantable MEAs, used in vitro. Neurons and muscle cells create ion currents through their membranes when excited, causing a change in voltage between the inside and the outside of the cell.
A brain–computer interface (BCI), sometimes called a brain–machine interface (BMI), is a direct communication pathway between the brain's electrical activity and an external device, most commonly a computer or robotic limb. BCIs are often directed at researching, mapping, assisting, augmenting, or repairing human cognitive or sensory-motor functions. They are often conceptualized as a human–machine interface that skips the intermediary step of physical movement of body parts, although they also blur the distinction between brain and machine.
In this course we study mathematical models of neurons and neuronal networks in the context of biology and establish links to models of cognition. The focus is on brain dynamics approximated by determ ...
This course focuses on the biophysical mechanisms of mammalian brain function. We will describe how neurons communicate through synaptic transmission in order to process sensory information ultimately ...
Neural interfaces (NI) are bioelectronic systems that interface the nervous system to digital technologies. This course presents their main building blocks (transducers, instrumentation & communicatio ...
This course explains the mathematical and computational models that are used in the field of theoretical neuroscience to analyze the collective dynamics of thousands of interacting neurons.
Neural decoding of the visual system is a subject of research interest, both to understand how the visual system works and to be able to use this knowledge in areas, such as computer vision or brain-computer interfaces. Spike-based decoding is often used, ...
In humans and animals, surprise is a physiological reaction to an unexpected event, but how surprise can be linked to plausible models of neuronal activity is an open problem. We propose a self-supervised spiking neural network model where a surprise signa ...
Explores detailed modeling of ion channels and neuronal morphologies in in silico neuroscience, covering neuron classification, ion channel kinetics, and experimental observations.
Explores the potential of the LGN for artificial vision and the differences in stimulus encoding between thalamic and cortical representations.
Recently, cutting-edge brain-machine interfaces (BMIs) have revealed the potential of decoders such as recurrent neural networks (RNNs) in predicting attempted handwriting [1] or speech [2], enabling rapid communication recovery after paralysis. However, c ...