
PETER RICHTARIK
Associate Professor of Optimization at the University of Edinburgh


Peter Richtarik is an Associate Professor of Computer Science and Mathematics at KAUST and an Associate Professor of Mathematics at the University of Edinburgh. He is an EPSRC Fellow in Mathematical Sciences, a Fellow of the Alan Turing Institute, and is affiliated with the Visual Computing Center and the Extreme Computing Research Center at KAUST. Dr. Richtarik received his PhD from Cornell University in 2007, then worked as a Postdoctoral Fellow in Louvain, Belgium, before joining Edinburgh in 2009 and KAUST in 2017.

Dr. Richtarik's research interests lie at the intersection of mathematics, computer science, machine learning, optimization, numerical linear algebra, high performance computing, and applied probability. Through his recent work on randomized decomposition algorithms (such as randomized coordinate descent methods, stochastic gradient descent methods, and their numerous extensions, improvements, and variants), he has contributed to the foundations of the emerging fields of big data optimization, randomized numerical linear algebra, and stochastic methods for empirical risk minimization. Several of his papers have attracted international awards, including the SIAM SIGEST Best Paper Award, the IMA Leslie Fox Prize (2nd prize, twice), and the INFORMS Computing Society Best Student Paper Award (sole runner-up). He is the founder and organizer of the Optimization and Big Data workshop series.

All sessions by PETER RICHTARIK

  • Day 1: Monday, April 10th, 3:30 pm

Faster PET Reconstruction with a Stochastic Primal-Dual Hybrid Gradient Method

In this talk we revisit the problem of Positron Emission Tomography (PET) reconstruction with non-smooth and convex priors. Because the data fidelity term in PET is the Poisson likelihood, few algorithms can solve this problem. A very popular choice is the Primal-Dual Hybrid Gradient (PDHG) method proposed by Chambolle and Pock. While this algorithm works well for small- to medium-sized problems in PET, computational issues arise when the data size becomes large: the system matrix for clinical PET scanners is very large and cannot be stored in the memory of most computers, so matrix-vector products must instead be computed on the fly by an expensive algorithm.
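To fix ideas, the basic PDHG iteration for a saddle-point problem min_x f(Kx) + g(x) can be sketched as follows. This is a minimal illustration on a toy quadratic-plus-L1 objective, not the Poisson-likelihood PET setting of the talk; the function names and the problem instance are assumptions for the sake of a runnable example.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (elementwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def pdhg_lasso(K, b, lam, n_iter=500):
    """Chambolle-Pock PDHG sketch for min_x 0.5||Kx - b||^2 + lam*||x||_1.

    Here f(z) = 0.5||z - b||^2 (handled through the prox of its conjugate
    in the dual step) and g(x) = lam*||x||_1 (handled in the primal step).
    """
    m, n = K.shape
    L = np.linalg.norm(K, 2)       # operator norm of K
    tau = sigma = 0.99 / L         # step sizes satisfying tau*sigma*L^2 < 1
    x = np.zeros(n)
    y = np.zeros(m)
    x_bar = x.copy()
    for _ in range(n_iter):
        # Dual ascent step: prox of sigma * f*, which here is a simple
        # rescaling since f is quadratic.
        y = (y + sigma * (K @ x_bar - b)) / (1.0 + sigma)
        # Primal descent step: prox of tau * lam * ||.||_1.
        x_new = soft_threshold(x - tau * (K.T @ y), tau * lam)
        # Extrapolation (over-relaxation) step with theta = 1.
        x_bar = 2.0 * x_new - x
        x = x_new
    return x
```

Note that each iteration requires one multiplication with K and one with K.T, which is exactly the cost that becomes prohibitive when the full system matrix cannot be held in memory.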

In this work we circumvent this issue by marrying the PDHG method with a randomized dual decomposition strategy (somewhat reminiscent of ART, Kaczmarz, and OSEM, which operate in the primal). PDHG arises as a special (and suboptimal) case of our more general method. We prove that our algorithm converges for a wide range of random samplings (e.g., uniform sampling, importance sampling, parallel samplings, ...), enabling us to implement various decomposition rules which can be chosen to fit the particular computing environment in use. We prove rigorous iteration complexity bounds for standard and accelerated variants of our method. Numerical examples show that the new method is much faster than PDHG.
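The dual-decomposition idea above can be sketched in code: split the dual variable into blocks, update only a randomly sampled block per iteration, and keep the primal update unbiased via an extrapolated running aggregate. This sketch assumes uniform serial sampling and reuses the toy quadratic-plus-L1 objective from the earlier example (the actual method targets Poisson likelihoods and supports much more general samplings); all names and step-size constants are illustrative.

```python
import numpy as np

def spdhg_lasso(K, b, lam, n_blocks=4, n_epochs=100, seed=0):
    """Stochastic dual-decomposition PDHG sketch for
    min_x 0.5||Kx - b||^2 + lam*||x||_1, with rows of K split into
    n_blocks dual blocks sampled uniformly at random.
    """
    rng = np.random.default_rng(seed)
    m, n = K.shape
    blocks = np.array_split(np.arange(m), n_blocks)
    p = 1.0 / n_blocks                          # uniform sampling probability
    norms = [np.linalg.norm(K[idx], 2) for idx in blocks]
    # Per-block step sizes chosen so that tau * sigma_i * ||K_i||^2 < p_i.
    sigma = [0.99 / Li for Li in norms]
    tau = 0.99 * p / max(norms)
    x = np.zeros(n)
    y = np.zeros(m)
    z = K.T @ y                                 # z = K^T y, kept up to date
    z_bar = z.copy()
    for _ in range(n_epochs * n_blocks):
        # Primal step uses the extrapolated dual aggregate z_bar.
        v = x - tau * z_bar
        x = np.sign(v) * np.maximum(np.abs(v) - tau * lam, 0.0)
        # Sample one dual block and update only that block.
        i = rng.integers(n_blocks)
        idx = blocks[i]
        y_new = (y[idx] + sigma[i] * (K[idx] @ x - b[idx])) / (1.0 + sigma[i])
        dz = K[idx].T @ (y_new - y[idx])
        y[idx] = y_new
        # Incremental update of z and unbiased extrapolation of z_bar.
        z = z + dz
        z_bar = z + dz / p
    return x
```

The key computational point is that each iteration touches only the sampled rows K[idx], so the full system matrix never needs to be applied (or stored) at once; the choice of blocks can be matched to the computing environment, as described above.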

This is joint work with Antonin Chambolle (Ecole Polytechnique), Matthias Ehrhardt (Cambridge), and Carola-Bibiane Schoenlieb (Cambridge).

KAUST, 15:30–16:00