This lecture covers the regularized cross-entropy risk in neural networks. It explains how feedforward neural networks are trained by minimizing the cross-entropy empirical risk, discusses the motivation behind cross-entropy minimization and the role of sensitivity factors, and examines the challenges that arise in deep networks. Practical examples and simulations illustrate the concepts.
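To make the central quantity concrete, below is a minimal sketch (not from the lecture) of evaluating the regularized cross-entropy empirical risk for a one-hidden-layer feedforward network. All names and shapes (`W1`, `W2`, `lam`, the toy data) are illustrative assumptions; the regularizer here is a standard squared-norm penalty on the weights.

```python
# Minimal sketch: regularized cross-entropy empirical risk for a
# one-hidden-layer feedforward network. Names and shapes are assumptions
# for illustration, not the lecture's notation.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: N samples, d features, K classes (assumed for illustration).
N, d, h, K = 100, 5, 16, 3
X = rng.standard_normal((N, d))
y = rng.integers(0, K, size=N)          # integer class labels
Y = np.eye(K)[y]                        # one-hot targets

# Random parameters: input -> hidden (sigmoid) -> output (softmax).
W1 = rng.standard_normal((d, h)) * 0.1
W2 = rng.standard_normal((h, K)) * 0.1

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)   # shift for numerical stability
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def risk(W1, W2, lam=1e-3):
    """Regularized cross-entropy empirical risk:
       (1/N) * sum_n -log p(y_n | x_n) + lam * (||W1||^2 + ||W2||^2)."""
    H = 1.0 / (1.0 + np.exp(-X @ W1))      # hidden-layer activations
    P = softmax(H @ W2)                    # predicted class probabilities
    ce = -np.mean(np.sum(Y * np.log(P + 1e-12), axis=1))
    reg = lam * (np.sum(W1**2) + np.sum(W2**2))
    return ce + reg

print(f"risk at random initialization: {risk(W1, W2):.4f}")
```

Training would then proceed by backpropagating through this risk and taking gradient steps on `W1` and `W2`; the `lam` term shrinks the weights and is one common form the "regularized" part of the risk can take.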