This lecture covers how the weights of a multi-layer feedforward neural network are learned, focusing on the back-propagation of errors from the output layer to the hidden neurons. It explains continuous activation functions such as the Sigmoid and Radial Basis Functions, the weight update rule, and how errors are propagated back to the hidden units. The lecture also discusses the backpropagation algorithm, its application to classification, activation functions such as the Sigmoid and ReLU, weight initialization methods, Kullback-Leibler divergence, entropy, cross entropy, and the effect of the learning rate.
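
As a minimal illustration of the weight update rule and the back-propagation of errors to hidden units mentioned above, the following sketch shows one gradient-descent step for a network with a single hidden layer, sigmoid activations, and squared-error loss. The layer sizes, learning rate, and data are illustrative assumptions, not values taken from the lecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 1))               # one input sample (4 features), assumed
t = np.array([[1.0]])                     # target output, assumed

W1 = rng.normal(scale=0.1, size=(3, 4))   # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(1, 3))   # hidden -> output weights
eta = 0.5                                 # learning rate (illustrative value)

# Forward pass
h = sigmoid(W1 @ x)                       # hidden activations
y = sigmoid(W2 @ h)                       # output activation

# Backward pass: output delta, then error back-propagated to the hidden units
delta_out = (y - t) * y * (1 - y)             # dE/dz at the output (squared-error loss)
delta_hid = (W2.T @ delta_out) * h * (1 - h)  # dE/dz at the hidden layer

# Weight update rule (one gradient-descent step)
W2 -= eta * delta_out @ h.T
W1 -= eta * delta_hid @ x.T
```

The key point is that the hidden-layer deltas are obtained by pushing the output delta backwards through the output weights and scaling by the sigmoid's derivative, which is exactly the back-propagation of errors to hidden units that the lecture describes.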