Our brain continuously self-organizes to construct and maintain an internal representation of the world based on the information arriving through sensory stimuli. Remarkably, cortical areas related to different sensory modalities appear to share the same f ...
Reinforcement learning is a type of machine learning in which reward is sparse and delayed. For example, in chess a series of moves is made until a sparse reward (win or loss) is issued, which makes it impossible to evaluate the value of a single move. Stil ...
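The credit-assignment difficulty described above can be sketched with a toy example. The whole-game credit rule and the discount factor below are illustrative assumptions, not the paper's method: the sparse final reward is simply propagated back to earlier moves with exponential discounting.

```python
# Toy sketch of delayed-reward credit assignment (assumption: each move
# in a finished game is credited the gamma-discounted final reward).
def assign_credit(num_moves, final_reward, gamma=0.95):
    # Move i (0-indexed) happens num_moves - 1 - i steps before the end,
    # so its credit is the final reward discounted by that many steps.
    return [final_reward * gamma ** (num_moves - 1 - i)
            for i in range(num_moves)]

# A 3-move game won by the agent, with strong discounting (gamma = 0.5):
credits = assign_credit(num_moves=3, final_reward=1.0, gamma=0.5)
```

Earlier moves receive exponentially less credit, which is one crude answer to the question of how much a single move contributed to the final outcome.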
Individual behavioral performance during learning is known to be affected by modulatory factors, such as stress and motivation, and by genetic predispositions that influence sensitivity to these factors. Despite numerous studies, no integrative framework i ...
How do animals learn to repeat behaviors that lead to obtaining food or other “rewarding” objects? As a biologically plausible paradigm for learning in spiking neural networks, spike-timing dependent plasticity (STDP) has been shown to perform well ...
Acute stress regulates different aspects of behavioral learning through the action of stress hormones and neuromodulators. Stress effects depend on the stressor's type, intensity, timing, and the learning paradigm. In addition, the genetic background of animals mi ...
We study the use and impact of a dictionary in a tomographic reconstruction setup. First, we build two different dictionaries: one using a set of basis functions (the Discrete Cosine Transform), and the other learned using patches extracted from traini ...
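As a minimal illustration of the first kind of dictionary, a 1-D DCT-II basis can be assembled directly from cosines. The dictionary size and column normalization here are assumptions for the sketch, not details taken from the paper:

```python
import numpy as np

def dct_dictionary(N):
    # Build an N x N dictionary whose columns are DCT-II basis vectors,
    # normalized so the dictionary is orthonormal.
    D = np.zeros((N, N))
    n = np.arange(N)
    for k in range(N):
        v = np.cos(np.pi * k * (n + 0.5) / N)
        D[:, k] = v / np.linalg.norm(v)
    return D

D = dct_dictionary(8)
```

Because the columns are orthonormal, `D.T @ D` is the identity, so any signal has a unique, trivially computed representation in this dictionary; a learned dictionary, by contrast, is typically overcomplete and requires sparse coding to find coefficients.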
SPIE - International Society for Optical Engineering, PO Box 10, Bellingham, WA 98227-0010, USA, 2011
In the Bayesian approach to sequential decision making, exact calculation of the (subjective) utility is intractable. This extends to most special cases of interest, such as reinforcement learning problems. While utility bounds are known to exist for this ...
Reinforcement learning in neural networks requires a mechanism for exploring new network states in response to a single, nonspecific reward signal. Existing models have introduced synaptic or neuronal noise to drive this exploration. However, those types o ...
In reinforcement learning, agents learn by performing actions and observing their outcomes. Sometimes, it is desirable for a human operator to “interrupt” an agent in order to prevent dangerous situations from happening. Yet, as part of their learni ...
This paper introduces a set of algorithms for Monte-Carlo Bayesian reinforcement learning. Firstly, Monte-Carlo estimation of upper bounds on the Bayes-optimal value function is employed to construct an optimistic policy. Secondly, gradient-based algorithm ...
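The first idea, Monte-Carlo estimation of an upper bound on the Bayes-optimal value, can be illustrated on a toy Bernoulli bandit. The Beta posteriors and sample count below are assumptions for the sketch; the bound rests on the generic inequality E[max_a θ_a] ≥ max_a E[θ_a], i.e. averaging the per-sample optimum is optimistic relative to the value of any fixed policy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed Beta posteriors over the success rates of two Bernoulli arms.
posteriors = [(3, 2), (2, 4)]

# Draw Monte-Carlo samples of the arm means from the posterior.
samples = np.array([[rng.beta(a, b) for (a, b) in posteriors]
                    for _ in range(10_000)])

# Value of the best fixed arm under the posterior (lower reference point).
best_fixed = samples.mean(axis=0).max()

# Monte-Carlo upper bound: average of the per-sample optimal value.
mc_upper = samples.max(axis=1).mean()
```

An optimistic policy built from `mc_upper`-style quantities prefers actions whose posterior still admits a high value, which is the exploration principle the paper's first algorithm exploits.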