This lecture covers the concept of cognitive maps as studied by Edward C. Tolman, focusing on spatial learning in rats and humans. It discusses the different reward systems used in the experiments, their outcomes, and the implications of latent learning. The lecture then turns to attention, motivation, and grouping in visual intelligence, exploring mechanisms such as selective attention and alignment score functions for computing attention. Building on this, it introduces scaled dot-product attention, explaining self-attention and cross-attention and how attention weights are computed from queries, keys, and values. The lecture concludes with an overview of transformers in machine learning, detailing the transformer encoder and decoder and emphasizing the role of positional encodings in preserving input order.
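The query-key-value computation mentioned above can be sketched as a minimal NumPy implementation of scaled dot-product attention (the function name and toy shapes are illustrative, not from the lecture):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # alignment scores between queries and keys
    # Numerically stable softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output is a weighted average of the values

# Self-attention: queries, keys, and values all derive from the same input.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))  # 4 tokens, embedding dimension 8
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)
```

In cross-attention, by contrast, the queries come from one sequence (e.g. the decoder) while the keys and values come from another (e.g. the encoder output), so `Q` and `K`/`V` may have different numbers of rows.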
This video is available exclusively on MediaSpace for a restricted audience. Please log in to MediaSpace to access it if you have the necessary permissions.