Monday 14 October 2013

Total Variation Denoising

Lately I became interested in optical flow estimation and met Thomas Pock, who answered nearly every one of my questions with "Yes, this can be done using variational methods" :) So I started to look for papers, found out there are many, and was a bit lost until I found this one: A. Chambolle, T. Pock. A First-Order Primal-Dual Algorithm for Convex Problems with Applications to Imaging. Journal of Mathematical Imaging and Vision, 2011. Before implementing the optical flow I implemented all the denoising examples, which are based on the same algorithm and are easier to grasp at the beginning.

The Matlab code can be downloaded from GitHub:

git clone https://github.com/JanSochman/TVdenoising.git

Feel free to use it.


Implemented total variation models for denoising
  • ROF model (preserves discontinuities due to the L1 norm of the gradient) $$ \min_{u\in X} ||\nabla u||_1 + \frac{\lambda}{2}||u - g||_2^2 $$
  • TV-L1 ROF (adds robustness to noise outliers) $$ \min_{u\in X} ||\nabla u||_1 + \lambda||u - g||_1 $$
  • Huber-ROF (avoids staircasing effect of TV methods) $$ \min_{u\in X} |\nabla u|_\alpha + \frac{\lambda}{2}||u - g||_2^2 \quad \quad |x|_\alpha = \left\{\begin{array}{lr} |x|^2/(2\alpha), \, & |x| \leq \alpha\\ |x| - \alpha/2, \, & |x| > \alpha \end{array} \right. $$

Here $g$ is the input image, $u$ is the sought solution, and $X$ is the set of $M\times N$ images.
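To make the discrete setting concrete, here is a minimal NumPy sketch of the ROF objective using a forward-difference gradient with replicate boundaries (the repository itself is Matlab; the helper names `grad` and `rof_energy` are illustrative, not from the code):

```python
import numpy as np

def grad(u):
    # Forward differences with zero gradient at the right/bottom boundary;
    # returns a (2, M, N) array with the x- and y-components.
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return np.stack([gx, gy])

def rof_energy(u, g, lam):
    # ROF objective: isotropic TV of u plus the weighted L2 data term.
    tv = np.sqrt((grad(u) ** 2).sum(axis=0)).sum()
    return tv + 0.5 * lam * ((u - g) ** 2).sum()
```

Swapping the data term for $\lambda\|u-g\|_1$ or the TV term for its Huber variant gives the energies of the other two models.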

The optimisation
  • All problems are convex, but only Huber-ROF is smooth
  • All problems can be expressed as $$ \min_{x\in X} F(Kx) + G(x) $$ with the corresponding primal-dual saddle-point problem $$ \min_{x\in X} \max_{y\in Y} \langle Kx,y\rangle + G(x) - F^{*}(y) $$
  • This can be solved by an iterative primal-dual algorithm (see the paper)
    • Initialisation: Choose $\tau, \sigma > 0$, $\theta \in [0, 1]$, $(x^0,y^0)\in X\times Y$ and set $\bar{x}^0 = x^0$.
    • Iterations ($n\ge 0$): Update $x^n$, $y^n$, $\bar{x}^n$ as follows: $$\left\{ \begin{array}{l} y^{n+1} = (I + \sigma\partial F^*)^{-1}(y^n + \sigma K \bar{x}^n)\\ x^{n+1} = (I + \tau \partial G)^{-1}(x^n-\tau K^*y^{n+1})\\ \bar{x}^{n+1} = x^{n+1} + \theta (x^{n+1}-x^n)\end{array}\right.$$
  • Two more variants with faster convergence are proposed in the paper for the case when $G$ and/or $F^*$ is uniformly convex (used when applicable)
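Specialised to the ROF model, the iteration above can be sketched in NumPy as follows (a sketch, not the repository's Matlab code; the step sizes and helper names are my own choices). For ROF, $(I + \sigma\partial F^*)^{-1}$ is a pointwise projection onto the dual unit ball and $(I + \tau\partial G)^{-1}$ is a pointwise averaging with $g$; $K = \nabla$, so $K^* = -\operatorname{div}$:

```python
import numpy as np

def grad(u):
    # Forward differences, zero at the right/bottom boundary; shape (2, M, N).
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return np.stack([gx, gy])

def div(p):
    # Discrete divergence, the negative adjoint of grad:
    # <grad u, p> = -<u, div p> for all u, p.
    px, py = p
    dx = np.zeros_like(px)
    dy = np.zeros_like(py)
    dx[:, 0] = px[:, 0]
    dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]
    dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]
    dy[1:-1, :] = py[1:-1, :] - py[:-2, :]
    dy[-1, :] = -py[-2, :]
    return dx + dy

def rof_primal_dual(g, lam, n_iter=100):
    # Basic Chambolle-Pock iteration for ROF denoising. The step sizes
    # satisfy tau * sigma * L^2 <= 1, with L^2 = ||K||^2 <= 8 for this
    # finite-difference gradient.
    L2 = 8.0
    tau = 0.25
    sigma = 1.0 / (L2 * tau)
    theta = 1.0
    x = g.copy()
    x_bar = x.copy()
    y = np.zeros((2,) + g.shape)
    for _ in range(n_iter):
        # Dual step: resolvent of F* = pointwise projection onto |y| <= 1.
        y = y + sigma * grad(x_bar)
        y = y / np.maximum(1.0, np.sqrt((y ** 2).sum(axis=0)))
        # Primal step: resolvent of G = pointwise averaging with g.
        x_old = x
        x = (x + tau * div(y) + tau * lam * g) / (1.0 + tau * lam)
        # Over-relaxation step.
        x_bar = x + theta * (x - x_old)
    return x
```

The other models only change the two resolvents: for the L1 data term the primal step becomes a pointwise shrinkage towards $g$, and for the Huber norm the dual step gains a $(1 + \sigma\alpha)$ rescaling before the projection.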

Methods comparison: salt & pepper noise

[Figure: original image, salt & pepper noise, and the results of ROF (non-robust), TV-L1 ROF (robust but staircasing), Huber-ROF (smooth but non-robust), and Huber-L1 ROF (smooth and robust)]


Methods comparison: Gaussian noise

[Figure: original image, Gaussian noise, and the results of ROF (non-robust), TV-L1 ROF (robust but staircasing), Huber-ROF (smooth but non-robust), and Huber-L1 ROF (smooth and robust)]