Stress, individual differences, and norepinephrine in reinforcement learning-based prediction of mouse behavior in conditioning and spatial learning
Whether it occurs in artificial or biological substrates, learning is a distributed phenomenon in at least two aspects. First, meaningful data and experiences are rarely found in one location, hence learners have a strong incentive to work to ...
For decades, neuroscientists and psychologists have observed that animal performance on spatial navigation tasks suggests an internal learned map of the environment. More recently, map-based (or model-based) reinforcement learning has become a highly activ ...
When humans or animals perform an action that leads to a desired outcome, they tend to repeat it. The mechanisms underlying learning from past experience and adapting future behavior are still not fully understood. In this thesis, I study how huma ...
We study the problem of inverse reinforcement learning (IRL) with the added twist that the learner is assisted by a helpful teacher. More formally, we tackle the following algorithmic question: How could a teacher provide an informative sequence of demonst ...
In this paper, we show how different choices regarding compliance affect a dual-arm assembly task. In addition, we present how the compliance parameters can be learned from a human demonstration. Compliant motions can be used in assembly tasks to mitigate p ...
Detection of curvilinear structures has long been of interest due to its wide range of applications. Large amounts of imaging data could be readily used in many fields, but it is practically impossible to analyze them manually. Hence, the need for automa ...
Brain-inspired Hyperdimensional (HD) computing is a promising solution for energy-efficient classification. However, the existing HD computing algorithms lack controllability over the training iterations, which often results in slow training or dive ...
Research on Computer-Supported Collaborative Learning (CSCL) has provided significant insights into why collaborative learning is effective and how we can effectively provide support for it. Building on this knowledge, we can investigate when collaboration ...
This chapter presents an overview of learning approaches for the acquisition of controllers and movement skills in humanoid robots. The term learning control refers to the process of acquiring a control strategy to achieve a task. While the definition is i ...