This lecture covers the regularized cross-entropy risk in neural networks. It explains how feedforward neural networks are trained by minimizing the cross-entropy empirical risk, discussing the motivation behind cross-entropy minimization, sensitivity factors, and the challenges that arise in deep networks. Practical examples and simulations illustrate the concepts discussed.
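As a concrete companion to the topic, the sketch below trains a small feedforward network by gradient descent on the L2-regularized cross-entropy empirical risk. The architecture, dataset, and hyperparameter values are illustrative assumptions, not the lecture's own setup; the point is only to show the risk being computed and minimized.

```python
import numpy as np

# Regularized cross-entropy empirical risk (assumed form):
#   R(w) = -(1/n) sum_i [ y_i log p_i + (1 - y_i) log(1 - p_i) ]
#          + (lam/2) * ||weights||^2

rng = np.random.default_rng(0)

# Toy binary classification data (XOR pattern, not linearly separable)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])

# One hidden layer: 2 inputs -> 8 tanh units -> 1 sigmoid output
W1 = rng.normal(scale=0.5, size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros(1)

lam = 1e-3   # regularization strength (illustrative value)
lr = 0.5     # learning rate (illustrative value)

def risk_and_grads(W1, b1, W2, b2):
    # Forward pass
    H = np.tanh(X @ W1 + b1)            # hidden activations
    z = (H @ W2 + b2).ravel()           # logits
    p = 1.0 / (1.0 + np.exp(-z))        # sigmoid probabilities
    n = len(y)
    # Cross-entropy empirical risk plus L2 penalty on the weights
    ce = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    risk = ce + 0.5 * lam * (np.sum(W1**2) + np.sum(W2**2))
    # Backward pass (standard backprop for this architecture)
    dz = (p - y)[:, None] / n           # d risk / d logits
    dW2 = H.T @ dz + lam * W2
    db2 = dz.sum(axis=0)
    dH = dz @ W2.T
    dZ1 = dH * (1 - H**2)               # tanh derivative
    dW1 = X.T @ dZ1 + lam * W1
    db1 = dZ1.sum(axis=0)
    return risk, (dW1, db1, dW2, db2)

risk0, _ = risk_and_grads(W1, b1, W2, b2)
for _ in range(2000):
    _, (dW1, db1, dW2, db2) = risk_and_grads(W1, b1, W2, b2)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
risk_final, _ = risk_and_grads(W1, b1, W2, b2)
print(f"risk: {risk0:.3f} -> {risk_final:.3f}")
```

Because the penalty term keeps the weights bounded, the risk cannot be driven all the way to zero; the trade-off between data fit and weight norm is exactly what the regularization parameter controls.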