Doubly Reparameterized Gradient Estimators for Monte Carlo Objectives

George Tucker
Dieterich Lawson
Shixiang Gu
Christopher Maddison
ICLR (2019)

Abstract

Deep latent variable models have become a popular model choice due to the
scalable learning algorithms introduced by Kingma & Welling (2013) and Rezende et al. (2014). These approaches maximize a variational lower bound on the intractable log likelihood of the observed data. Burda et al. (2015) introduced a
multi-sample variational bound, IWAE, that is at least as tight as the standard
variational lower bound and becomes increasingly tight as the number of samples
increases. Counterintuitively, the typical inference network gradient estimator for
the IWAE bound performs poorly as the number of samples increases (Rainforth
et al., 2018; Le et al., 2018). Roeder et al. (2017) propose an improved gradient estimator; however, they are unable to show that it is unbiased. We show that it is in fact biased and that the bias can be estimated efficiently with a second application of the reparameterization trick. The resulting doubly reparameterized gradient (DReG) estimator does not degrade as the number of samples increases, resolving the previously raised issues. The same idea can be used to improve many recently introduced
training techniques for latent variable models. In particular, we show that this
estimator reduces the variance of the IWAE gradient, the reweighted wake-sleep
update (RWS) (Bornschein & Bengio, 2014), and the jackknife variational inference (JVI) gradient (Nowozin, 2018). Finally, we show that this computationally
efficient, unbiased drop-in gradient estimator translates to improved performance
for all three objectives on several modeling tasks.
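To make the idea above concrete, the sketch below shows one way the inference-network (DReG) gradient of the IWAE bound could be implemented via a second application of the reparameterization trick. It is a minimal illustration, not the authors' reference code: it assumes a diagonal-Gaussian variational posterior q_phi(z|x), a toy joint density standing in for log p(x, z), and illustrative names (log_joint, dreg_phi_surrogate, etc.).

```python
import jax
import jax.numpy as jnp


def log_normal(z, mean, log_std):
    """Diagonal-Gaussian log density, summed over the last axis."""
    return jnp.sum(
        -0.5 * jnp.log(2.0 * jnp.pi) - log_std
        - 0.5 * ((z - mean) / jnp.exp(log_std)) ** 2,
        axis=-1,
    )


def log_joint(x, z, theta):
    """Toy generative model, z ~ N(0, I), x ~ N(theta * z, I); stands in for log p(x, z)."""
    log_prior = log_normal(z, jnp.zeros_like(z), jnp.zeros_like(z))
    log_lik = log_normal(x, theta * z, jnp.zeros_like(theta * z))
    return log_prior + log_lik


def dreg_phi_surrogate(phi, theta, x, eps):
    """Surrogate whose gradient w.r.t. phi is the DReG estimate of the IWAE gradient.

    phi = (mean, log_std) of a diagonal-Gaussian q_phi(z|x); eps is [K, D] standard normal noise.
    """
    mean, log_std = phi
    # First reparameterization: z depends on phi only through this transform.
    z = mean + jnp.exp(log_std) * eps                                    # [K, D]
    # Evaluate log q with *stopped* variational parameters, so the only remaining
    # phi-dependence of log_w is through the reparameterized z.
    mean_sg = jax.lax.stop_gradient(mean)
    log_std_sg = jax.lax.stop_gradient(log_std)
    log_w = log_joint(x, z, theta) - log_normal(z, mean_sg, log_std_sg)  # [K]
    # Self-normalized importance weights, treated as constants.
    w_tilde = jax.lax.stop_gradient(jax.nn.softmax(log_w))
    # Gradient of this sum w.r.t. phi is sum_i w_tilde_i^2 * (d log w_i / d z_i) * (d z_i / d phi).
    return jnp.sum(w_tilde ** 2 * log_w)


# Illustrative usage with K = 8 importance samples and a 2-dimensional latent.
key = jax.random.PRNGKey(0)
x = jnp.array([1.0, -0.5])
theta = jnp.ones(2)
phi = (jnp.zeros(2), jnp.zeros(2))        # (mean, log_std) of q_phi(z|x)
eps = jax.random.normal(key, (8, 2))
phi_grad = jax.grad(dreg_phi_surrogate)(phi, theta, x, eps)  # ascent direction for the IWAE bound
```

In this sketch the model (decoder) parameters theta would still be trained with the usual reparameterized IWAE gradient of the log-mean-exp of log_w; per the abstract, the same double-reparameterization idea also applies to the RWS and JVI updates.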
