Publication

Enactive Robot Vision

Dario Floreano, Mototaka Suzuki
2008
Journal paper
Abstract

Enactivism claims that sensory-motor activity and embodiment are crucial to perceiving the environment, and that machine vision could be a much simpler business if considered in this context. However, computational models of enactive vision are very rare and often rely on handcrafted control systems. In this paper, we describe results from experiments in which evolutionary robots can choose whether to exploit sensory-motor coordination in a set of vision-based tasks. We show that complex visual tasks can be tackled with remarkably simple neural architectures generated by a co-evolutionary process of active vision and feature selection. We describe the application of this methodology in four sets of experiments, namely shape discrimination, car driving, wheeled robot navigation, and bipedal robot navigation. A further set of experiments, in which the visual system can develop its receptive fields by means of unsupervised Hebbian learning, demonstrates that the receptive fields are significantly affected by the behavior of the system and differ from those predicted by most computational models of visual cortex. Finally, we show that our robots can also replicate the performance deficiencies observed in sensory-deprivation experiments with kittens.
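
The abstract gives no implementation details, so the following is purely an illustrative sketch of the kind of mechanism it alludes to: a normalized Hebbian (Oja) rule letting a single linear visual unit develop its receptive field from whatever image patches the robot's behavior happens to fixate. All names, parameters, and the training loop are hypothetical assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: one linear visual unit with a 5x5 receptive field.
PATCH = 5
w = rng.normal(scale=0.1, size=PATCH * PATCH)  # initial random weights
eta = 0.01                                      # learning rate (assumed)

def sample_patch(image, x, y):
    """Extract the patch currently fixated by the (simulated) camera.

    In an active-vision setting, (x, y) would be chosen by the robot's
    own behavior, so the statistics of the sampled patches depend on it.
    """
    p = image[y:y + PATCH, x:x + PATCH].astype(float).ravel()
    return (p - p.mean()) / (p.std() + 1e-8)    # simple normalization

def oja_step(w, x, eta):
    """One update of Oja's rule: Hebbian growth with implicit weight decay.

    dw = eta * y * (x - y * w); the decay term keeps ||w|| bounded, so the
    receptive field converges instead of growing without limit.
    """
    y = w @ x
    return w + eta * y * (x - y * w)

# Toy training loop on random images standing in for behavior-driven input.
for _ in range(5000):
    image = rng.random((32, 32))
    x_pos, y_pos = rng.integers(0, 32 - PATCH, size=2)
    w = oja_step(w, sample_patch(image, x_pos, y_pos), eta)

print("Learned receptive field:\n", w.reshape(PATCH, PATCH).round(2))
```

Replacing the random fixation points with behavior-driven ones changes the input statistics the unit learns from, which is the mechanism by which the paper argues the resulting receptive fields come to differ from those predicted by standard models of visual cortex.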
