Doubly Stochastic Variational Inference for Deep Gaussian Processes

Advances in Neural Information Processing Systems 30 (NIPS 2017), 4-9 December 2017, Long Beach, California, USA (spotlight presentation)

Authors: Hugh Salimbeni (Imperial College), Marc Deisenroth (Imperial College)

Abstract: Gaussian processes (GPs) are a good choice for function approximation as they are flexible, robust to overfitting, and provide well-calibrated predictive uncertainty. Deep Gaussian processes (DGPs) are multi-layer generalisations of GPs, but inference in these models has proved challenging. Existing approaches to inference in DGP models assume approximate posteriors that force independence between the layers, and they do not work well in practice. We present a doubly stochastic variational inference algorithm that does not force independence between layers. With our method of inference we demonstrate that a DGP model can be used effectively on data ranging in size from hundreds to a billion points. We provide strong empirical evidence that our inference scheme for DGPs works well in practice in both classification and regression.
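The "doubly stochastic" idea in the abstract can be illustrated with a minimal NumPy sketch. The two sources of stochasticity are (1) subsampling a minibatch of the data and (2) Monte Carlo sampling of each layer's output, propagated sequentially so that correlations between layers are preserved rather than forced to be independent. The layer model below (a linear mean plus Gaussian noise, via the reparameterisation trick) is a hypothetical stand-in for the paper's sparse-GP layers, not the actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_layer(f_in, w, var, rng):
    """Draw one sample of a layer's output given a sampled input.

    Reparameterisation trick: mean + sqrt(var) * eps, eps ~ N(0, I).
    The linear mean f_in @ w is a simplified placeholder for a GP
    layer's variational conditional mean (an assumption of this sketch).
    """
    mean = f_in @ w
    eps = rng.standard_normal(mean.shape)
    return mean + np.sqrt(var) * eps

def expected_loglik_estimate(x, y, weights, var, rng, n_samples=10):
    """Doubly stochastic estimate of the expected log-likelihood term.

    Stochasticity 1: (x, y) is a random minibatch of the full dataset.
    Stochasticity 2: layer outputs are sampled layer by layer, so each
    sample carries dependence between layers through the whole network.
    """
    total = 0.0
    for _ in range(n_samples):
        f = x
        for w in weights:          # propagate one joint sample through all layers
            f = sample_layer(f, w, var, rng)
        # Gaussian log-likelihood (up to constants) of targets at the last layer
        total += -0.5 * np.mean((f.squeeze() - y) ** 2) / var
    return total / n_samples

# Toy data and a two-layer "deep" model (names here are illustrative)
x_full = rng.standard_normal((100, 3))
y_full = np.sin(x_full[:, 0])
idx = rng.choice(100, size=16, replace=False)   # minibatch (stochasticity 1)
weights = [0.1 * rng.standard_normal((3, 3)),
           0.1 * rng.standard_normal((3, 1))]
estimate = expected_loglik_estimate(x_full[idx], y_full[idx], weights,
                                    var=0.1, rng=rng)
print(estimate)
```

In the paper this Monte Carlo term is combined with analytic KL terms for the inducing variables to form the evidence lower bound; the sketch above only shows the sampled expectation that makes the bound tractable at scale.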

Deep Learning

Gaussian Processes

Probabilistic Modelling

See paper