Model-based reinforcement learning and navigation in animals and machines
RATIONALE: To determine the suitability of the nine-hole box to characterise mouse performance on a free operant task and a discrete trials task, and to validate the tests by probing whether d-amphetamine and scopolamine modify performance of the task as p ...
Scaling down robots to miniature size introduces many new challenges including memory and program size limitations, low processor performance and low power autonomy. In this paper we describe the concept and implementation of learning of a safe-wandering ta ...
Activation dynamics of hippocampal subregions during spatial learning, and their interplay with neocortical regions, are an important dimension in the understanding of hippocampal function. Using the [14C]-2-deoxyglucose autoradiographic method, we have chara ...
By using other agents' experiences and knowledge, a learning agent may learn faster, make fewer mistakes, and create some rules for unseen situations. These benefits would be gained if the learning agent can extract proper rules out of the other agents' know ...
We study spatial learning and navigation for autonomous agents. A state space representation is constructed by unsupervised Hebbian learning during exploration. As a result of learning, a representation of the continuous two-dimensional (2-D) manifold in t ...
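As a rough illustration of the kind of mechanism this abstract refers to, the sketch below implements generic unsupervised Hebbian (Oja-rule) learning of a small population code over a 2-D space. The network size, learning rate, and radial activation function are illustrative assumptions, not the specific model described in the paper.

```python
import numpy as np

# Minimal sketch of unsupervised Hebbian learning of a place-coding layer.
# All parameter names and values are illustrative assumptions.

rng = np.random.default_rng(0)

n_inputs = 2      # agent position (x, y) as raw input
n_cells = 50      # units forming the state-space representation
lr = 0.05         # Hebbian learning rate

# Random initial weights; each row drifts toward positions that frequently
# co-activate the corresponding cell (Oja's rule keeps the norms bounded).
W = rng.normal(scale=0.1, size=(n_cells, n_inputs))

def activation(x):
    """Radial response: cells tuned to positions near their weight vector."""
    d = np.linalg.norm(W - x, axis=1)
    return np.exp(-(d ** 2) / 0.1)

for _ in range(5000):
    x = rng.uniform(0.0, 1.0, size=n_inputs)   # random exploratory position
    y = activation(x)
    # Oja-style Hebbian update: dW = lr * y * (x - y * W)
    W += lr * y[:, None] * (x[None, :] - y[:, None] * W)

# After exploration, each row of W sits somewhere on the 2-D manifold,
# giving a coarse population code for location.
print(W[:5])
```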
Institute of Electrical and Electronics Engineers, 2004
A biologically inspired computational model of rodent representation-based (locale) navigation is presented. The model combines visual input in the form of realistic two-dimensional grey-scale images and odometer signals to drive the firing of simulated p ...
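The toy sketch below shows one generic way a simulated place cell could be driven jointly by a stored grey-scale view and an odometry estimate. The Gaussian combination and all parameters are assumptions for illustration only, not the model summarized above.

```python
import numpy as np

# Toy place cell driven by two cues: a visual-similarity term and a
# path-integration (odometry) term. Purely illustrative assumptions.

rng = np.random.default_rng(2)

stored_view = rng.uniform(size=(32, 32))   # grey-scale snapshot taken at the field centre
field_centre = np.array([0.4, 0.6])        # odometry estimate when the snapshot was stored
sigma_vis, sigma_odo = 0.2, 0.15

def place_cell_rate(current_view, odometry_pos):
    """Firing rate peaks when both the view and the integrated position match the stored ones."""
    visual_err = np.mean((current_view - stored_view) ** 2)
    odo_err = np.sum((odometry_pos - field_centre) ** 2)
    return np.exp(-visual_err / (2 * sigma_vis ** 2)) * np.exp(-odo_err / (2 * sigma_odo ** 2))

# High rate near the stored location with a similar view, low rate elsewhere.
print(place_cell_rate(stored_view + rng.normal(scale=0.01, size=(32, 32)), np.array([0.42, 0.58])))
print(place_cell_rate(rng.uniform(size=(32, 32)), np.array([0.9, 0.1])))
```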
When animals explore an environment, they store useful spatial information in their brains. In subsequent visits, they can recall this information and thus avoid dangerous places or find again a food location. This ability, which may be crucial for the ani ...
In [8], Yamauchi and Beer explored the abilities of continuous time recurrent neural networks (CTRNNs) to display reinforcement-learning-like abilities. The investigated tasks were generation and learning of short bit sequences. This "learning" came about ...
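For readers unfamiliar with CTRNNs, here is a minimal Euler-integration sketch of the standard CTRNN state equation; the weights, biases, and input are placeholders, and this is not Yamauchi and Beer's specific network or task.

```python
import numpy as np

# Minimal CTRNN sketch (Euler integration), assuming the standard formulation
# tau_i * dy_i/dt = -y_i + sum_j w_ij * sigma(y_j + theta_j) + I_i.
# All weights, biases, and inputs below are placeholders.

rng = np.random.default_rng(1)

n = 5                                   # number of neurons
dt = 0.01                               # Euler step size
tau = rng.uniform(0.5, 2.0, size=n)     # time constants
theta = rng.normal(size=n)              # biases
W = rng.normal(scale=1.0, size=(n, n))  # recurrent weights

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))

def step(y, I):
    """One Euler step of the CTRNN state y under external input I."""
    dy = (-y + W @ sigma(y + theta) + I) / tau
    return y + dt * dy

y = np.zeros(n)
inputs = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # feed a bit on the first neuron
for _ in range(1000):
    y = step(y, inputs)

print(sigma(y + theta))  # neuron outputs after settling
```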
Our research focuses on the behavioral animation of virtual humans who are capable of taking actions by themselves. We deal more specifically with reinforcement learning methodologies, which integrate in an original way the RL agent and the autonomous virt ...
Scaling down robots to miniature size introduces many new challenges including memory and program size limitations, low processor performance and low power autonomy. In this paper we describe the concept and implementation of learning of safe-wandering and ...