In signal processing, particularly image processing, total variation denoising, also known as total variation regularization or total variation filtering, is a noise removal process (filter). It is based on the principle that signals with excessive and possibly spurious detail have high total variation, that is, the integral of the absolute image gradient is high. According to this principle, reducing the total variation of the signal, subject to it being a close match to the original signal, removes unwanted detail whilst preserving important details such as edges. The concept was pioneered by L. I. Rudin, S. Osher, and E. Fatemi in 1992 and so is today known as the ROF model.
This noise removal technique has advantages over simple techniques such as linear smoothing or median filtering which reduce noise but at the same time smooth away edges to a greater or lesser degree. By contrast, total variation denoising is a remarkably effective edge-preserving filter, i.e., simultaneously preserving edges whilst smoothing away noise in flat regions, even at low signal-to-noise ratios.
For a digital signal $y_n$, we can, for example, define the total variation as
$$V(y) = \sum_n |y_{n+1} - y_n|.$$
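As a concrete illustration, here is a minimal NumPy sketch of this discrete total variation (the function name `total_variation` and the example signal values are illustrative, not from the original):

```python
import numpy as np

def total_variation(y):
    """Discrete total variation: the sum of absolute differences
    between consecutive samples of the signal."""
    return np.sum(np.abs(np.diff(y)))

# A noisy step signal has higher total variation than a clean one.
clean = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
noisy = clean + np.array([0.05, -0.04, 0.02, 0.03, -0.06, 0.01])
print(total_variation(clean))  # 1.0
print(total_variation(noisy))  # 1.32, larger because of the added jitter
```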
Given an input signal $x_n$, the goal of total variation denoising is to find an approximation, call it $y_n$, that has smaller total variation than $x_n$ but is "close" to $x_n$. One measure of closeness is the sum of square errors:
$$E(x, y) = \frac{1}{2} \sum_n (x_n - y_n)^2.$$
So the total-variation denoising problem amounts to minimizing the following discrete functional over the signal $y_n$:
$$E(x, y) + \lambda V(y).$$
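Putting the two pieces together, a short sketch of this discrete objective (the function name `tv_objective` is an assumption for illustration; the $1/2$ factor follows the sum-of-square-errors definition above):

```python
import numpy as np

def tv_objective(y, x, lam):
    """ROF-style discrete objective: data fidelity E(x, y)
    plus lam times the total variation V(y)."""
    fidelity = 0.5 * np.sum((x - y) ** 2)  # E(x, y)
    tv = np.sum(np.abs(np.diff(y)))        # V(y)
    return fidelity + lam * tv
```

The parameter `lam` corresponds to the regularization weight $\lambda$: it trades off fidelity to the input against smoothness of the result.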
By differentiating this functional with respect to $y_n$, we can derive a corresponding Euler–Lagrange equation that can be numerically integrated with the original signal $x_n$ as the initial condition. This was the original approach. Alternatively, since the functional is convex, techniques from convex optimization can be used to minimize it and find the solution $y_n$.
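Because the absolute value is not differentiable at zero, one common numerical device (an assumption here, not part of the original formulation) is to replace $|t|$ with the smooth surrogate $\sqrt{t^2 + \varepsilon^2}$ for a small $\varepsilon$ and hand the resulting differentiable convex objective to a general-purpose solver. A minimal sketch using SciPy (the function name `denoise_tv` and the test signal are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def denoise_tv(x, lam, eps=1e-6):
    """Minimize 0.5*||x - y||^2 + lam * sum(sqrt(dy^2 + eps^2))
    over y, starting from the noisy input x."""
    def objective(y):
        dy = np.diff(y)
        return 0.5 * np.sum((x - y) ** 2) + lam * np.sum(np.sqrt(dy**2 + eps**2))
    result = minimize(objective, x0=x, method="L-BFGS-B")
    return result.x

# Piecewise-constant test signal with additive Gaussian noise.
rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, 0.2], 50)
noisy = clean + 0.1 * rng.standard_normal(clean.size)
denoised = denoise_tv(noisy, lam=0.5)
```

Dedicated TV solvers (e.g., dual or proximal methods) are faster in practice; the general-purpose route above is only meant to make the convex-optimization viewpoint concrete.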
The regularization parameter $\lambda$ plays a critical role in the denoising process. When $\lambda = 0$, there is no smoothing and the result is the same as minimizing the sum of squares, i.e., the noisy input itself. As $\lambda$ increases, the total variation term plays an increasingly strong role, forcing the result to have smaller total variation at the expense of being less like the input signal.
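A quick numerical check of this tradeoff, using the same smoothed objective as in the sketch above (the signal and the $\lambda$ values are chosen purely for illustration):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = np.repeat([0.0, 1.0], 50) + 0.1 * rng.standard_normal(100)

for lam in [0.0, 0.1, 1.0, 10.0]:
    def objective(y):
        # Smoothed total-variation objective, as in the sketch above.
        return 0.5 * np.sum((x - y) ** 2) + lam * np.sum(np.sqrt(np.diff(y) ** 2 + 1e-12))
    y = minimize(objective, x0=x, method="L-BFGS-B").x
    # TV shrinks and the squared distance to the input grows with lam.
    print(f"lam={lam:5.1f}  TV={np.sum(np.abs(np.diff(y))):7.3f}  "
          f"SSE={np.sum((x - y) ** 2):7.3f}")
```

With $\lambda = 0$ the output equals the noisy input (zero squared error, full noise), while larger $\lambda$ values drive the total variation down and the reconstruction error up, which is exactly the tradeoff that makes the choice of $\lambda$ critical.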