Delves into using simulations for Human-Robot Interaction, covering learning from human expertise and preferences, user and system models, simulation results, and an application to assisted drone landing.
Explores challenges and opportunities in vision-based robotic perception, covering topics like SLAM, place recognition, event cameras, and collaborative visual intelligence.
Explores the boundary between hard coding and learning in robotics, emphasizing the importance of co-designing robotic hands and manipulation approaches to exploit environmental constraints.
Delves into the training and applications of Vision-Language-Action models, emphasizing the role of large language models in robotic control and the transfer of web knowledge. Highlights experimental results and directions for future research.
Explores motor neuroprosthetics, covering the peripheral nervous system, motor decoding, robotic hands, and sensory feedback through advanced techniques and implantable systems.