Department of Mathematics - Special Colloquium - Fantastic Diffusion Models and Where to Apply Them
Diffusion models, which convert noise into new data instances by learning to reverse a Markov diffusion process, have become a cornerstone in generative AI. While their practical power is now widely recognized, the theoretical underpinnings remain far from mature. We first develop a suite of non-asymptotic theory toward understanding the data generation process of diffusion models, highlighting fast convergence under mild data assumptions. Motivated by this theory, we then advocate diffusion models as an expressive data prior for solving ill-posed inverse problems, and introduce a provably robust plug-and-play method (DPnP) to perform posterior sampling. DPnP alternately calls two samplers: a proximal consistency sampler based solely on the forward model, and a denoising diffusion sampler based solely on the score functions of the data prior. Along the way, applications in materials science will be demonstrated to illustrate the promise of diffusion models in scientific applications.
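To make the alternating structure concrete, below is a minimal toy sketch in Python of a split-Gibbs-style plug-and-play sampler on a one-dimensional linear inverse problem. It is not the speakers' DPnP implementation; the function names (proximal_consistency_sampler, denoising_diffusion_sampler, dpnp_sample), the Gaussian prior, the coupling parameter rho, and the Langevin stand-in for the score-based denoiser are all illustrative assumptions meant only to show how the two samplers alternate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed for illustration): 1-D Gaussian prior x ~ N(0, 1),
# linear measurement y = a * x + Gaussian noise.
a, sigma_noise = 2.0, 0.5
x_true = rng.normal()
y = a * x_true + sigma_noise * rng.normal()

def prior_score(x):
    """Score of the standard Gaussian prior, d/dx log p(x) = -x."""
    return -x

def proximal_consistency_sampler(x, rho):
    """Sample a proximal step for the data-fidelity term around x, i.e. from a
    density proportional to exp(-(y - a*z)^2 / (2 sigma^2) - rho/2 * (z - x)^2),
    which uses only the forward model (Gaussian, so sampled in closed form)."""
    prec = (a ** 2) / sigma_noise ** 2 + rho
    mean = (a * y / sigma_noise ** 2 + rho * x) / prec
    return mean + rng.normal() / np.sqrt(prec)

def denoising_diffusion_sampler(x, rho, n_steps=20, step=0.05):
    """Crude stand-in for the score-based denoiser: unadjusted Langevin steps
    targeting exp(log p(z) - rho/2 * (z - x)^2), using only the prior score."""
    z = x
    for _ in range(n_steps):
        grad = prior_score(z) - rho * (z - x)
        z = z + step * grad + np.sqrt(2 * step) * rng.normal()
    return z

def dpnp_sample(n_iters=200, rho=4.0):
    """Alternate the two samplers to draw an approximate posterior sample."""
    x = rng.normal()
    for _ in range(n_iters):
        x = proximal_consistency_sampler(x, rho)
        x = denoising_diffusion_sampler(x, rho)
    return x

samples = np.array([dpnp_sample() for _ in range(200)])
print(f"true x: {x_true:.3f}, posterior mean estimate: {samples.mean():.3f}")
```

In this toy, the coupling parameter rho controls how tightly the two update steps agree; larger values push the alternating chain closer to the true posterior at the cost of slower mixing, which is the usual trade-off in split-Gibbs-type plug-and-play schemes.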