Covers information measures such as entropy, Kullback-Leibler divergence, and the data-processing inequality, along with probability kernels and mutual information.
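A minimal sketch of these measures for small discrete distributions, using base-2 logarithms so results are in bits; the function names are illustrative and assume NumPy is available, not taken from the source:

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(p) = -sum p_i log2 p_i (with 0 log 0 := 0)."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log2(p[nz]))

def kl_divergence(p, q):
    """D(p || q) = sum p_i log2(p_i / q_i); assumes supp(p) is in supp(q)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    nz = p > 0
    return np.sum(p[nz] * np.log2(p[nz] / q[nz]))

def mutual_information(joint):
    """I(X;Y) = D(P_XY || P_X x P_Y) for a joint distribution matrix."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)  # marginal of X (rows)
    py = joint.sum(axis=0, keepdims=True)  # marginal of Y (columns)
    return kl_divergence(joint.ravel(), (px @ py).ravel())

# Fair coin: 1 bit; an independent joint has zero mutual information.
print(entropy([0.5, 0.5]))                               # 1.0
print(kl_divergence([0.5, 0.5], [0.9, 0.1]))             # ~0.737
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # 0.0
```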
Introduces decision trees for classification, covering entropy, split quality, and the Gini index, along with their advantages and disadvantages and the random forest classifier.
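A hedged sketch, not the source's code, of the two split-quality measures named here: entropy-based information gain and the Gini index computed from class labels at a node:

```python
from collections import Counter
import math

def entropy(labels):
    """H = -sum p_c log2 p_c over class proportions at a node."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gini(labels):
    """Gini impurity G = 1 - sum p_c^2; 0 for a pure node."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def information_gain(parent, left, right):
    """Reduction in entropy from splitting `parent` into `left`/`right`."""
    n = len(parent)
    weighted = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(parent) - weighted

# A pure split is maximally informative; a useless split gains nothing.
parent = list("AABB")
print(information_gain(parent, list("AA"), list("BB")))  # 1.0
print(information_gain(parent, list("AB"), list("AB")))  # 0.0
print(gini(parent))                                      # 0.5
```

A random forest then averages the votes of many such trees, each grown on a bootstrap sample with randomized feature choices, which reduces the variance of any single tree.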
Explores entropy measured in bits and its relation to probability distributions, focusing on how information is gained or lost in various scenarios.
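A small illustration, assumed rather than drawn from the source, of what "bits" mean here: the surprisal -log2(p) of observing an outcome of probability p, and the entropy of a binary source:

```python
import math

def surprisal_bits(p):
    """Information gained, in bits, by observing an outcome of probability p."""
    return -math.log2(p)

def binary_entropy(p):
    """H(p) for a Bernoulli(p) source, in bits."""
    if p in (0.0, 1.0):
        return 0.0  # a certain outcome carries no information
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(surprisal_bits(0.5))    # 1.0 bit: a fair coin flip
print(surprisal_bits(0.125))  # 3.0 bits: a 1-in-8 event is more surprising
print(binary_entropy(0.5))    # 1.0: maximal uncertainty for a binary source
print(binary_entropy(0.9))    # ~0.469: a biased coin is more predictable
```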