In various science and engineering applications, such as independent component analysis, genetic analysis, speech recognition, manifold learning, and time delay estimation, it is useful to estimate the differential entropy of a system or process, given some observations.
The simplest and most common approach uses histogram-based estimation, but other approaches have been developed and used, each with its own benefits and drawbacks. The main factor in choosing a method is often a trade-off between the bias and the variance of the estimate, although the nature of the (suspected) distribution of the data may also be a factor.
The histogram approach uses the idea that the differential entropy of a probability distribution $f(x)$ for a continuous random variable $X$,

$$H(X) = -\int f(x) \log f(x)\, dx ,$$

can be approximated by first approximating $f(x)$ with a histogram of the observations, and then finding the discrete entropy of a quantization of $f(x)$,

$$H(X) \approx -\sum_{i=1}^{n} f(x_i) \log \frac{f(x_i)}{w(x_i)} ,$$

with bin probabilities given by that histogram. The histogram is itself a maximum-likelihood (ML) estimate of the discretized frequency distribution, where $w(x_i)$ is the width of the $i$th bin. Histograms can be quick to calculate and simple, so this approach has some attraction. However, the estimate produced is biased, and although corrections can be made to the estimate, they may not always be satisfactory.
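As a rough sketch of how this works in practice, the following Python snippet builds a histogram of one-dimensional samples and applies the bin-width correction above to estimate the entropy in nats. The function name and bin count are illustrative choices, not part of any standard API.

```python
import numpy as np

def histogram_entropy(samples, bins=30):
    """Estimate differential entropy (in nats) of 1-D samples from a histogram.

    Uses H ~ -sum_i p_i * log(p_i / w_i), where p_i is the empirical bin
    probability and w_i the bin width.
    """
    counts, edges = np.histogram(samples, bins=bins)
    widths = np.diff(edges)
    probs = counts / counts.sum()
    nonzero = probs > 0                      # empty bins contribute nothing
    return -np.sum(probs[nonzero] * np.log(probs[nonzero] / widths[nonzero]))

# Example: standard normal samples; true entropy = 0.5 * log(2*pi*e) ~ 1.42 nats.
# The estimate lands nearby, but carries the bias discussed above.
rng = np.random.default_rng(0)
print(histogram_entropy(rng.standard_normal(10_000)))
```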
A method better suited to multidimensional probability density functions (pdfs) is to first estimate the pdf with some method, and then compute the entropy from that estimate. One useful pdf estimation method is Gaussian mixture modeling (GMM), in which the expectation-maximization (EM) algorithm is used to find an ML estimate of a weighted sum of Gaussian pdfs approximating the data pdf.
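As an illustration of the GMM route, the sketch below assumes scikit-learn's GaussianMixture, which fits the mixture with EM. Because a Gaussian mixture's entropy has no closed form, the snippet approximates it by Monte Carlo as the negative mean log-density over samples drawn from the fitted model; the function name, component count, and sample sizes are hypothetical choices.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_entropy(data, n_components=3, n_mc_samples=100_000, seed=0):
    """Fit a GMM with EM, then estimate its differential entropy (in nats)
    by Monte Carlo: H ~ -E[log p(x)] over samples drawn from the fitted model."""
    gmm = GaussianMixture(n_components=n_components, random_state=seed).fit(data)
    samples, _ = gmm.sample(n_mc_samples)        # draws from the fitted mixture
    return -gmm.score_samples(samples).mean()    # score_samples returns log-density

# Example: 2-D data with two well-separated clusters; the result is the entropy
# of the fitted mixture, used as a stand-in for the entropy of the data pdf.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 1, size=(500, 2)),
                  rng.normal(4, 1, size=(500, 2))])
print(gmm_entropy(data, n_components=2))
```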
If the data is one-dimensional, we can imagine taking all the observations and putting them in order of their value. The spacing between one value and the next then gives us a rough idea of (the reciprocal of) the probability density in that region: the closer together the values are, the higher the probability density.
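A standard way to turn this spacing idea into an estimator (not spelled out above) is Vasicek's m-spacing estimator. The Python sketch below uses a common square-root-of-n heuristic for m and clamps the order statistics at the sample boundaries; both are assumptions for illustration rather than details from the text.

```python
import numpy as np

def vasicek_entropy(samples, m=None):
    """Vasicek m-spacing estimate of differential entropy (in nats) for 1-D data.

    H ~ (1/n) * sum_i log( n/(2m) * (x_(i+m) - x_(i-m)) ),
    where x_(i) are the order statistics, clamped at the sample boundaries.
    """
    x = np.sort(np.asarray(samples, dtype=float))
    n = len(x)
    if m is None:
        m = max(1, int(np.sqrt(n)))                  # heuristic choice of spacing order
    upper = x[np.minimum(np.arange(n) + m, n - 1)]   # x_(i+m), clamped to the largest value
    lower = x[np.maximum(np.arange(n) - m, 0)]       # x_(i-m), clamped to the smallest value
    return np.mean(np.log(n / (2 * m) * (upper - lower)))

# Example: standard normal samples; true entropy ~ 1.42 nats (estimate is biased
# for finite n, particularly near the boundaries).
rng = np.random.default_rng(0)
print(vasicek_entropy(rng.standard_normal(5_000)))
```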