2006) or as a discretized form of the total variation
(Combettes and Pesquet, 2004; Rudin et al., 1992).
Many approaches to problem (2) rely on the convexity of g, such as NESTA (Becker et al., 2009), conjugate subgradient methods (Hawe et al., 2011), or TwIST (Bioucas-Dias and Figueiredo, 2007), to mention just a few. That is the reason why the $\ell_1$-norm is most commonly employed. Although this convex relaxation leads to perfect signal recovery under certain assumptions, cf. (Donoho and Elad, 2003), it has been shown in (Chartrand and Staneva, 2008) that in some cases the $\ell_p$-norm for $0 \le p < 1$ severely outperforms its convex counterpart.
In this work, we do not assume convexity of g, but we require differentiability. For the $\ell_p$-norm, this can easily be achieved by a suitable smooth approximation. Here, we propose an approach based on minimizing the unconstrained Lagrangian form of (2), given by
$$\underset{s^\star \in \mathbb{R}^n}{\operatorname{minimize}} \; f(s^\star) = \tfrac{1}{2}\|A s^\star - y\|_2^2 + \lambda g(s^\star). \qquad (4)$$
The Lagrange multiplier $\lambda \in \mathbb{R}_0^+$ weighs between the sparsity of the solution and its fidelity to the acquired samples, according to $\lambda \sim \varepsilon$.
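For concreteness, the objective (4) can be evaluated with one common smooth approximation of the $\ell_p$ quasi-norm, $g(s) = \sum_i (s_i^2 + \varepsilon^2)^{p/2}$. The function names, the choice $p = 0.5$, and this particular smoothing are illustrative assumptions, not prescribed above; a minimal NumPy sketch:

```python
import numpy as np

def g_smooth(s, p=0.5, eps=1e-6):
    """Differentiable surrogate for the l_p sparsity measure:
    g(s) = sum_i (s_i^2 + eps^2)^(p/2); this smoothing is one common choice."""
    return np.sum((s**2 + eps**2) ** (p / 2))

def objective(s, A, y, lam, p=0.5, eps=1e-6):
    """Unconstrained Lagrangian f(s) = 0.5 * ||A s - y||_2^2 + lam * g(s), cf. (4)."""
    r = A @ s - y
    return 0.5 * r @ r + lam * g_smooth(s, p, eps)

# Small example: at the true sparse vector the data term vanishes,
# leaving only the penalty lam * g(s).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
s_true = np.zeros(50)
s_true[[3, 17, 31]] = [1.0, -2.0, 0.5]
y = A @ s_true
print(objective(s_true, A, y, lam=0.1))
```

As $\lambda$ grows, minimizers of this objective trade data fidelity for sparsity, which is the trade-off described above.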
Consider now a sequence of linear inverse prob-
lems whose solutions vary smoothly over time. As
an example, one may think of the denoising of short
video sequences or the reconstruction of compres-
sively sensed magnetic resonance image sequences,
cf. (Lustig et al., 2007). In this work, we propose an
approach to track the solutions of time varying linear
inverse problems. We employ preceding solutions to
predict the current signal’s estimate. To the best of
the authors’ knowledge, this idea has not been pur-
sued so far in the literature. The crucial idea is to use
a discretized Newton flow to track solutions of a time
varying version of (4). We provide three practical up-
date formulas for the tracking problem and conclude
with a proof of concept by applying our approach to
a short synthetic video sequence, where each video
frame is recovered from compressively sampled mea-
surements.
2 TRACKING THE SOLUTIONS
2.1 Problem Statement
Let $t \mapsto s(t) \in \mathbb{R}^n$ be a $C^1$-curve, i.e. one with continuous first derivative, that represents a time varying signal $s$.
s. Moreover, let y(t) = As(t) be the measurements
of $s$ at time $t$. In this paper, we consider the problem of reconstructing a sequence of signals $(s(t_k))_{k \in \mathbb{N}}$ at consecutive instances of time. Instead of estimating $s(t_k)$ by solving the inverse problem based on the measurements $y(t_k)$, we investigate to what extent the previously recovered estimates $s^\star_i$ of $s(t_i)$, $i = 1, \dots, k$, can be employed to predict $s(t_{k+1})$ without acquiring new measurements $y(t_{k+1})$. This prediction step may serve as an intermediate replacement for the reconstruction step, or it may be employed as an initialization for the reconstruction at time $t_{k+1}$. Note that in our approach, we assume a fixed measurement matrix $A$.
Consider the time-variant version of the unconstrained Lagrangian function
$$f(s^\star, t) = \tfrac{1}{2}\|A s^\star - y(t)\|_2^2 + \lambda g(s^\star). \qquad (5)$$
At a minimum of (5) at time $t$, the gradient
$$F(s^\star, t) := \frac{\partial}{\partial s^\star} f(s^\star, t) \qquad (6)$$
necessarily vanishes. Consequently, we want to find the smooth curve $s^\star(t)$ such that
$$F(s^\star(t), t) = 0. \qquad (7)$$
In other words, we want to track the minima of (5) over time. A discretized Newton flow, explained in the following subsection, will be used for that purpose.
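Under the smooth $\ell_p$ penalty assumed earlier, the gradient map $F$ of (6) has the closed form $A^\top(As^\star - y(t)) + \lambda \nabla g(s^\star)$. The following sketch, with a hypothetical measurement curve `y_of_t` of our own choosing, verifies this expression against a finite-difference gradient of $f(\cdot, t)$:

```python
import numpy as np

def grad_g(s, p=0.5, eps=1e-6):
    """Gradient of the smoothed l_p penalty g(s) = sum_i (s_i^2 + eps^2)^(p/2)."""
    return p * s * (s**2 + eps**2) ** (p / 2 - 1)

def F(s, t, A, y_of_t, lam, p=0.5, eps=1e-6):
    """Gradient map F(s, t) = (d/ds) f(s, t) from (6), for the smoothed l_p choice of g.

    The roots F(s*(t), t) = 0, cf. (7), are the minima to be tracked.
    """
    return A.T @ (A @ s - y_of_t(t)) + lam * grad_g(s, p, eps)

# Sanity check against a central finite-difference gradient of f(., t).
rng = np.random.default_rng(1)
A = rng.standard_normal((10, 15))
y_of_t = lambda t: np.sin(t) * (A @ np.ones(15))  # hypothetical smooth measurements
s = rng.standard_normal(15)
lam, t = 0.1, 0.3

def f(s):
    r = A @ s - y_of_t(t)
    return 0.5 * r @ r + lam * np.sum((s**2 + 1e-12) ** 0.25)

num = np.array([(f(s + 1e-6 * e) - f(s - 1e-6 * e)) / 2e-6 for e in np.eye(15)])
print(np.max(np.abs(F(s, t, A, y_of_t, lam) - num)))  # agreement up to discretization error
```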
2.2 Discretized Newton Flow
Homotopy methods are a well-known approach for
solving problem (7). These methods are based on
an associated differential equation whose solutions
track the roots of $F$. To make the paper self-contained, we briefly rederive the discretized Newton flow for the situation at hand, following (Baumann et al., 2005). Specifically, we consider the implicit differential equation
$$J_F(s^\star, t)\,\dot{s}^\star + \frac{\partial}{\partial t} F(s^\star, t) = -\alpha F(s^\star, t), \qquad (8)$$
where α > 0 is a free parameter that stabilizes the dy-
namics around the desired solution. Here,
$$J_F(s^\star, t) := \frac{\partial}{\partial s^\star} F(s^\star, t) \qquad (9)$$
is the $(n \times n)$-matrix of partial derivatives of $F$ with respect to $s^\star$. Under suitable invertibility conditions on $J_F$, we rewrite (8) in explicit form as
$$\dot{s}^\star = -J_F(s^\star, t)^{-1}\left(\alpha F(s^\star, t) + \frac{\partial}{\partial t} F(s^\star, t)\right). \qquad (10)$$
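As an illustration of how (10) can be turned into an update rule, the sketch below takes one explicit Euler step of size $h$, again assuming the smoothed $\ell_p$ penalty; approximating $\frac{\partial}{\partial t}F$ by a finite difference of consecutive measurements is our own illustrative choice, not a formula from the text:

```python
import numpy as np

def newton_flow_step(s, y_k, y_k1, A, lam, alpha, h, p=0.5, eps=1e-6):
    """One explicit Euler step of the discretized Newton flow (10).

    Assumes g(s) = sum_i (s_i^2 + eps^2)^(p/2), so that
    F(s, t) = A.T (A s - y(t)) + lam * grad g(s), J_F from (9) is
    A.T A + lam * Hess g(s), and d/dt F = -A.T dy/dt is approximated
    here by the finite difference -A.T (y_k1 - y_k) / h.
    """
    w = s**2 + eps**2
    grad_g = p * s * w ** (p / 2 - 1)
    hess_g = p * w ** (p / 2 - 2) * (w + (p - 2) * s**2)   # diagonal Hessian of g
    F = A.T @ (A @ s - y_k) + lam * grad_g
    J = A.T @ A + np.diag(lam * hess_g)                    # J_F(s, t), cf. (9)
    dFdt = -A.T @ ((y_k1 - y_k) / h)
    s_dot = -np.linalg.solve(J, alpha * F + dFdt)          # explicit form (10)
    return s + h * s_dot                                   # Euler discretization

# Toy usage (hypothetical setup): track a slowly drifting signal.
rng = np.random.default_rng(2)
A = rng.standard_normal((45, 30))
s_true = lambda t: np.cos(t) * np.ones(30)                 # smooth curve t -> s(t)
h, t = 0.01, 0.0
s_est = s_true(0.0)
for k in range(5):
    s_est = newton_flow_step(s_est, A @ s_true(t), A @ s_true(t + h),
                             A, lam=1e-3, alpha=10.0, h=h)
    t += h
print(np.linalg.norm(s_est - s_true(t)))
```

The parameter $\alpha$ plays its stabilizing role here: the term $-\alpha F$ pulls the iterate back toward the current root, while the $\frac{\partial}{\partial t}F$ term follows the root's drift.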
We discretize (10) at time instances $t_k$, for $k \in \mathbb{N}$, and assume without loss of generality a fixed stepsize $h >$
ICPRAM 2012 - International Conference on Pattern Recognition Applications and Methods