Bias-Reduction in Variational Regularization

Wednesday, March 29, 10:30-11:30 - Julian Rasch - University of Münster

Abstract: Variational methods suffer from an inevitable bias. The simplest example
is l^1-regularization, which leads to sparse solutions but distorts the
quantitative peak values. We present a two-step method to reduce bias. After
solving the standard variational problem, the key idea is to add a consecutive
debiasing step that minimizes the data fidelity on an appropriate space, the
so-called model subspace. Here, these spaces are defined by Bregman distances,
or infimal convolutions thereof, using the subgradient appearing in the
optimality condition of the variational method. In particular, they lead to a
decomposition of the overall bias into two parts, model bias and method bias,
of which we shall tackle the latter. We provide numerous examples and
experiments to illustrate both the performance and the statistical behavior
of the method.
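As an illustration of the two-step idea in the simplest setting mentioned above, the following sketch (an assumption for illustration, not the speaker's exact algorithm) applies it to l^1-regularized least squares: step one solves the variational problem with a standard proximal gradient method (ISTA), and step two re-minimizes the data fidelity restricted to the subspace singled out by the optimality condition, which for l^1 reduces to the support of the first-step solution. All function names and the synthetic data are hypothetical.

```python
import numpy as np

def ista_l1(A, b, lam, iters=2000):
    """Step 1: solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 via ISTA
    (proximal gradient with soft-thresholding)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz const. of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - b))          # gradient step on the fidelity
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox of lam*||.||_1
    return x

def debias_on_support(A, b, x_l1, tol=1e-8):
    """Step 2: minimize the data fidelity ||Ax - b||^2 on the model
    subspace, which for l^1 is spanned by the support of x_l1."""
    S = np.abs(x_l1) > tol
    x_db = np.zeros_like(x_l1)
    if S.any():
        x_db[S] = np.linalg.lstsq(A[:, S], b, rcond=None)[0]
    return x_db

# Synthetic sparse recovery problem (hypothetical data).
rng = np.random.default_rng(0)
m, n = 100, 50
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)            # unit-norm columns
x_true = np.zeros(n)
x_true[[3, 17, 42]] = [2.0, -2.5, 1.5]    # sparse ground truth with clear peaks
b = A @ x_true + 0.01 * rng.standard_normal(m)

x_l1 = ista_l1(A, b, lam=0.2)             # biased: peak values are shrunk
x_db = debias_on_support(A, b, x_l1)      # debiased: peaks restored on the support
```

In this sketch the shrinkage of the peak values by the l^1 penalty is the method bias, and refitting on the support removes it while keeping the sparsity (the model) fixed.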

To learn more about this event, see the Séminaires SPOC article.