This lecture explores the importance of image representation, beginning with the evolution of ImageNet results and the use of pre-training for downstream tasks. It examines the challenges of supervised learning, particularly the cost of obtaining supervision, and the benefits of self-supervised learning (SSL) strategies such as RotNet and Jigsaw puzzles. The speaker discusses the effectiveness of different self-supervised learning approaches and how they are evaluated, along with recent advances in SSL, including cost-function-based SSL and the use of subspace methods in deep learning. The lecture concludes with insights on self-expressiveness, attention mechanisms, and the application of transformer-based multitask learning to image representation.