differential entropy

Generalization of entropy to continuous variables.

\begin{equation} h(X) = -\int_a^b p(x) \log{p(x)} dx \end{equation}

where \(p(x)\) is the probability density of \(X\), and \(a\) and \(b\) are the bounds of its support.

Differential entropy can be negative, because \(p(x)\) is a density rather than a probability: it can exceed 1, making \(\log{p(x)}\) positive on parts of the support.
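For example, for \(X \sim \mathrm{Uniform}(a, b)\) the density is constant at \(1/(b-a)\), so

\begin{equation} h(X) = -\int_a^b \frac{1}{b-a} \log{\frac{1}{b-a}} dx = \log{(b-a)} \end{equation}

which is negative whenever \(b - a < 1\): using \(\log_2\), a uniform variable on an interval of width \(1/2\) has \(h(X) = -1\) bit.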

Note that entropy itself is inherently discrete: it counts the bits needed to encode some data. So the number of bits actually needed, i.e. the entropy of a continuous variable discretized into intervals of width \(\Delta\), is

\begin{equation} h_\Delta(X) \approx -\int_a^b p(x) \log{p(x)} dx - \log{\Delta} = h(X) - \log{\Delta} \end{equation}

This means the differential entropy is only the \(\Delta\)-independent part of the description cost: on top of it, roughly \(-\log{\Delta}\) extra bits are required to reach a precision of \(\Delta\), and the smaller the interval \(\Delta\) (i.e. the higher the accuracy), the more bits are needed.
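As a concrete check, take \(X \sim \mathrm{Uniform}(0, 1)\), for which \(h(X) = 0\). Quantizing with \(\Delta = 2^{-n}\) gives \(2^n\) equally likely bins, so

\begin{equation} h_\Delta(X) = \log_2{2^n} = n = h(X) - \log_2{\Delta} \end{equation}

i.e. exactly \(n\) bits to specify \(X\) to \(n\) binary places.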

An important result is the differential entropy of a gaussian.
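For a gaussian \(X \sim \mathcal{N}(\mu, \sigma^2)\),

\begin{equation} h(X) = \frac{1}{2} \log{\left( 2 \pi e \sigma^2 \right)} \end{equation}

which depends only on the variance \(\sigma^2\), not on the mean.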

Among all distributions with a given variance, the maximum of differential entropy is attained by a gaussian variable, and not by a uniform variable like with standard (discrete) entropy over a finite set of outcomes!
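Formally, for any density with variance \(\sigma^2\),

\begin{equation} h(X) \leq \frac{1}{2} \log{\left( 2 \pi e \sigma^2 \right)} \end{equation}

with equality if and only if \(X\) is gaussian. (On a fixed bounded support \([a, b]\), the uniform density is the maximizer instead, mirroring the discrete case.)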
