Looking forward to NeurIPS 2019

Visiting NeurIPS 2019? Come and meet our team


We’re excited to once again be participating in NeurIPS 2019, from December 8th to 14th at the Vancouver Convention Centre (VCC) in Canada. NeurIPS is the global gathering for the artificial intelligence and machine learning community, and the world’s largest AI conference (at least, according to Wikipedia!).

This year we’re bringing a team of machine learning engineers and researchers armed with papers and posters; they’ll also be hosting workshops and giving demos on our stand (booth 11). Please drop by for a visit. We’d love to have a chat and answer any questions about what we do. We’re also sponsoring the 2nd Symposium on Advances in Approximate Bayesian Inference (AABI) on December 8th, just down the road from the VCC at the Pan Pacific Hotel, where our director of research Dr James Hensman will be a panellist in Session 4.

It’s no secret we’re leading proponents of Gaussian processes (GPs), which can solve classification, regression and other problems with a fraction of the data (and training time) required by deep neural networks. Plus, GPs elegantly quantify the uncertainty in their resulting predictions.
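To give a flavour of that last point (this is a loose illustration with hypothetical helper names, not our production code): a GP’s posterior mean and variance fall out of a few lines of linear algebra, and the variance naturally grows as you move away from the observed data.

```python
# Minimal NumPy sketch of GP regression with an RBF kernel.
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two sets of 1-D inputs."""
    sqdist = (A[:, None] - B[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / lengthscale ** 2)

def gp_posterior(X, y, X_star, noise=1e-2):
    """Posterior mean and variance of a zero-mean GP at test points X_star."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))   # kernel on training inputs
    K_s = rbf_kernel(X, X_star)                     # cross-covariance
    K_ss = rbf_kernel(X_star, X_star)               # kernel on test inputs
    alpha = np.linalg.solve(K, y)
    mean = K_s.T @ alpha
    v = np.linalg.solve(K, K_s)
    var = np.diag(K_ss - K_s.T @ v)
    return mean, var

# Five noisy-free observations of sin(x)
X = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.sin(X)
X_star = np.array([1.5, 10.0])  # one test point near the data, one far away
mean, var = gp_posterior(X, y, X_star)
# var[0] (near the data) is small; var[1] (far from it) is large,
# reflecting the model's honest uncertainty.
```

The same mechanics, with sparse approximations and much more besides, are what GPflow packages up for real-world use.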

This year, we’ve taken great strides to make GPs more computationally efficient. At this year’s International Conference on Machine Learning in California, we were awarded the ICML 2019 Best Paper Award for mathematically proving that the amount of computation needed for training GPs on large datasets is much less than implied by previous results. This indicates that there are no theoretical obstacles to getting GPs to work well and fast on large datasets. We’re also putting significant effort into investigating the banded structure of precision matrices, in the expectation that banded matrix operations will make complex models easier to build and cheaper to run.
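To sketch why banded structure pays off (a toy example, not our internal code): a tridiagonal precision matrix, i.e. bandwidth one, can be solved in O(n) operations with the classic Thomas algorithm, versus O(n³) for a dense solve.

```python
# Toy illustration: O(n) solve for a tridiagonal (banded) system.
import numpy as np

def thomas_solve(lower, diag, upper, b):
    """Solve a tridiagonal system via the Thomas algorithm in O(n)."""
    n = len(diag)
    d = diag.astype(float).copy()
    u = upper.astype(float).copy()
    rhs = b.astype(float).copy()
    # Forward elimination
    for i in range(1, n):
        w = lower[i - 1] / d[i - 1]
        d[i] -= w * u[i - 1]
        rhs[i] -= w * rhs[i - 1]
    # Back substitution
    x = np.empty(n)
    x[-1] = rhs[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (rhs[i] - u[i] * x[i + 1]) / d[i]
    return x

# A small symmetric positive-definite tridiagonal "precision" matrix
n = 6
diag = np.full(n, 2.0)
off = np.full(n - 1, -1.0)
b = np.arange(1.0, n + 1)
x = thomas_solve(off, diag, off, b)

# Dense equivalent, for checking the answer
A = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
```

The banded solve touches only the non-zero diagonals, which is exactly the kind of saving that makes complex models cheaper to run at scale.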

We’re always looking to apply our theoretical expertise to real-world problems. A particular focus this year has been on optimising supply chains for retailers and manufacturers. We’re developing algorithms that can control large stochastic networks to reduce storage costs, maximise resource utilisation and enhance throughput.

Meanwhile, we continue to develop and support the popular GPflow open-source probabilistic modelling toolkit on behalf of the community. The team recently released version 2.0-rc1, with TensorFlow 2.0 compatibility and numerous other performance, stability and usability improvements.

Our papers and workshops

We’ve had four papers accepted at the conference this year, and our researchers will be available during the poster sessions (or at any time on our stand) to further explain our work or answer any questions you may have.

In A Unified Bellman Optimality Principle Combining Reward Maximization and Empowerment, Felix Leibfried, Sergio Pascual Diaz and Jordi Grau-Moya show how Reinforcement Learning (RL) agents can be motivated to explore their environment in the absence of early rewards. In Pseudo-Extended Markov chain Monte Carlo (MCMC), James Hensman and co-authors explain how MCMC can yield samples from multimodal posterior distributions more efficiently than a popular alternative algorithm, Hamiltonian Monte Carlo (HMC).

Meanwhile, Mark van der Wilk has made important contributions to two further papers. Scalable Bayesian Dynamic Covariance Modeling with Variational Wishart and Inverse Wishart Processes looks into modelling time-varying covariance matrices and was co-authored with a fellow alumnus of the University of Cambridge Machine Learning Group, where Mark did his PhD under our chief scientist and chairman, Professor Carl Rasmussen.

Bayesian Layers: A Module for Neural Network Uncertainty presents a unified way of performing Bayesian inference over functions. This collaboration with Google Brain presents a software interface that allows Bayesian neural networks and GPs to be specified and used in the same way in deep architectures. While deep GPs still require further investigation, this does show that the approaches have significant commonalities and that GPs may also find a place in the deep learning community.

Our researchers have written further papers for various workshops, and these will be presented at the Learning Transferable Skills and Optimization Foundations of Reinforcement Learning workshops.

As well as writing papers, several of our researchers are high-scoring NeurIPS reviewers themselves. This is time-consuming work, vital for making NeurIPS a success, and we’re proud of their contribution.

Interested in joining us on the next stage of our journey? Come and talk to the team at the show! (Alternatively, visit https://www.prowler.io/careers.)

For more news, information and opinion, follow us on Twitter and LinkedIn.
