Lecture

Understanding Visual Cortex with Deep Learning Models

Description

This lecture discusses a 2016 review paper by Yamins and DiCarlo on using goal-driven deep learning models to understand sensory cortex. The paper shows how hierarchical convolutional neural networks can model the structure and function of the visual cortex, compares them with earlier models, and suggests future improvements. The lecture covers the ventral visual pathway, neural network structure, and parameter design, emphasizing the roles of task, architecture, and parameters. It explains the relationship between architectural and filter parameters and shows how such models can predict neural responses without being trained directly on neural data. The lecture concludes by discussing advances in GPU programming, automatic learning procedures, and large labeled datasets, along with future perspectives on network architectures and experimental techniques for better modeling brain microcircuits.
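To make the goal-driven approach concrete, here is a minimal sketch (in Python, assuming PyTorch, NumPy, and scikit-learn are available) of the kind of mapping procedure the lecture describes: a small hierarchical convolutional network produces features for a set of images, and only a linear readout is fit to neural responses, so the network itself is never trained on neural data. The class name TinyVentralStreamModel and the random stimuli and responses are illustrative stand-ins, not the models or recordings used in the paper.

```python
# Minimal sketch of the goal-driven modeling recipe:
# 1) build a small hierarchical convolutional network (architectural parameters),
# 2) extract its internal activations for a set of images,
# 3) fit only a linear readout that predicts "neural" responses.
# In the real work the network is first trained on object categorization and the
# readout is fit to recorded V4/IT responses; here random weights and synthetic
# data stand in so the example runs anywhere.

import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

torch.manual_seed(0)
np.random.seed(0)

class TinyVentralStreamModel(nn.Module):
    """Hypothetical toy hierarchy loosely mirroring V1 -> V4 -> IT stages."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.MaxPool2d(2),                          # early, "V1-like" stage
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                          # intermediate stage
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),                  # late, "IT-like" stage
        )

    def forward(self, x):
        return self.features(x).flatten(1)            # image -> feature vector

model = TinyVentralStreamModel().eval()

# Synthetic stimuli and synthetic "neural" recordings (n_images x n_neurons).
images = torch.randn(200, 3, 64, 64)
neural_responses = np.random.randn(200, 50)

with torch.no_grad():
    features = model(images).numpy()

# Linear readout from model features to neural responses: only this mapping is
# fit to the neural data, the network weights are left untouched.
X_train, X_test, y_train, y_test = train_test_split(
    features, neural_responses, test_size=0.25, random_state=0)
readout = Ridge(alpha=1.0).fit(X_train, y_train)
print("held-out R^2:", readout.score(X_test, y_test))
```

With random weights and random responses the held-out score is near zero; the point of the sketch is the workflow, in which predictive accuracy on held-out neural data is the benchmark for comparing architectures.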
