This lecture explores the conditions under which the output of a neural network can be interpreted as a probability, focusing on the cross-entropy error function for classification tasks. Given a sufficiently large dataset and a sufficiently flexible network, the network's outputs can be read as class probabilities.
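As a rough sketch of the quantity involved (the notation below is generic and not taken from the lecture itself), the multi-class cross-entropy error for one-hot targets $t_{nk}$ and network outputs $y_k(\mathbf{x}_n)$ over $N$ examples and $K$ classes is:

$E = -\sum_{n=1}^{N} \sum_{k=1}^{K} t_{nk} \ln y_k(\mathbf{x}_n)$

Under the conditions the lecture discusses (a large dataset and a network flexible enough to reach the minimum of $E$), the outputs that minimise this error approximate the posterior class probabilities, $y_k(\mathbf{x}) \approx P(C_k \mid \mathbf{x})$.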