Utilising the CLT Structure in Stochastic Gradient based Sampling: Improved Analysis and Faster Algorithms

Aniket Das
Anant Raj
Conference on Learning Theory (COLT), 2023

Abstract

We consider stochastic approximations of sampling algorithms, such as the Unadjusted Langevin Algorithm (ULA) and Interacting Particle Dynamics (IPD), obtained using random batches. The noise introduced by the random batches is near-Gaussian due to the central limit theorem (CLT), while the driving Brownian motion is exactly Gaussian. Exploiting this structure, we show that the error produced by the stochastic approximation can be absorbed into the diffusion process driving the algorithm, yielding convergence guarantees. This approach also leads to a new algorithm, the covariance corrected random batch method, which corrects for the additional noise from the random batches and thereby converges faster. To summarize our contributions:

(1) For SGLD, we establish the first non-exploding convergence bounds in KL divergence, under significantly fewer assumptions and with better dimension dependence (improved from $d^4$ to $d^{1.5}$). We also analyze covariance corrected SGLD and demonstrate that it enjoys even faster convergence (a sketch of both update rules is given below).

(2) For IPD, we analyze covariance corrected random batch methods. Under fewer assumptions, we remove the exponential dependence on the time horizon that appears in prior work on random batch methods.
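To make the setting concrete, the following is a minimal sketch (ours, not taken from the paper) of a random-batch SGLD step and of one plausible covariance corrected variant. Here the correction is assumed to shrink the injected Gaussian noise by an empirical estimate of the minibatch gradient covariance, so that minibatch noise plus injected noise has covariance close to that of exact ULA; the function names and the precise form of the correction are illustrative, not the paper's exact algorithm.

```python
import numpy as np

def sgld_step(x, data, grad_u, step, batch_size, rng):
    """One SGLD step: the full-data gradient is replaced by an unbiased
    random-batch estimate, whose noise is near-Gaussian by the CLT, on top
    of the exactly Gaussian diffusion term sqrt(2*step) * N(0, I)."""
    n = len(data)
    idx = rng.choice(n, size=batch_size, replace=False)
    # Unbiased minibatch estimate of grad U(x) = sum_i grad_u(x, data[i]).
    g_hat = (n / batch_size) * sum(grad_u(x, data[i]) for i in idx)
    return x - step * g_hat + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)

def cc_sgld_step(x, data, grad_u, step, batch_size, rng):
    """Covariance corrected SGLD step (illustrative form of the correction):
    the injected noise covariance is reduced by an empirical estimate of the
    minibatch gradient covariance, so total noise stays close to 2*step*I."""
    n, d = len(data), x.size
    idx = rng.choice(n, size=batch_size, replace=False)  # needs batch_size >= 2
    # Per-sample unbiased estimates n * grad_u of the full gradient.
    per_sample = np.stack([n * grad_u(x, data[i]) for i in idx])  # (b, d)
    g_hat = per_sample.mean(axis=0)
    # Empirical covariance of the minibatch gradient estimator g_hat.
    centred = per_sample - g_hat
    sigma_hat = centred.T @ centred / (batch_size * (batch_size - 1))
    # Injected covariance 2*step*I - step^2 * sigma_hat, projected to PSD.
    cov = 2.0 * step * np.eye(d) - step ** 2 * sigma_hat
    vals, vecs = np.linalg.eigh(cov)
    sqrt_cov = vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T
    return x - step * g_hat + sqrt_cov @ rng.standard_normal(d)
```

In a toy usage one would iterate either step with a small step size, e.g. to sample from a posterior over `data`; the sketch is meant only to show where the CLT structure of the minibatch noise enters and how a covariance correction could exploit it.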
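For IPD, the random batch idea replaces the full pairwise interaction with an average over a small random subset of particles at each step. The sketch below (again ours, with illustrative confining and interaction gradients `grad_v` and `grad_w`) shows one plain random-batch Euler-Maruyama step; the covariance corrected variant analyzed in the paper would additionally adjust the injected noise, analogously to the SGLD sketch above, and is omitted here.

```python
import numpy as np

def ipd_random_batch_step(X, grad_v, grad_w, step, batch_size, rng):
    """One random-batch step of interacting particle dynamics: each particle
    follows -grad V(x_i) - mean_j grad W(x_i - x_j) plus Brownian noise, with
    the mean over all particles replaced by a mean over a random batch."""
    n, d = X.shape
    drift = -np.array([grad_v(x) for x in X])  # confining term, shape (n, d)
    for i in range(n):
        # Random batch of interaction partners for particle i (excluding i).
        others = rng.choice(np.delete(np.arange(n), i), size=batch_size, replace=False)
        drift[i] -= np.mean([grad_w(X[i] - X[j]) for j in others], axis=0)
    return X + step * drift + np.sqrt(2.0 * step) * rng.standard_normal((n, d))
```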