
Yee Whye Teh
Person information
- affiliation: University of Oxford, UK
2020 – today
- 2021
- [i84]Emilien Dupont, Yee Whye Teh, Arnaud Doucet:
Generative Models as Distributions of Functions. CoRR abs/2102.04776 (2021)
- 2020
- [j15]Benjamin Bloem-Reddy, Yee Whye Teh:
Probabilistic Symmetries and Invariant Neural Networks. J. Mach. Learn. Res. 21: 90:1-90:61 (2020)
- [c120]Adam Foster, Martin Jankowiak, Matthew O'Meara, Yee Whye Teh, Tom Rainforth:
A Unified Stochastic Gradient Approach to Designing Bayesian-Optimal Experiments. AISTATS 2020: 2959-2969
- [c119]Giuseppe Di Benedetto, Francois Caron, Yee Whye Teh:
Non-exchangeable feature allocation models with sublinear growth of the feature sizes. AISTATS 2020: 3208-3218
- [c118]Siddhant M. Jayakumar, Wojciech M. Czarnecki, Jacob Menick, Jonathan Schwarz, Jack W. Rae, Simon Osindero, Yee Whye Teh, Tim Harley, Razvan Pascanu:
Multiplicative Interactions and Where to Find Them. ICLR 2020
- [c117]Michalis K. Titsias, Jonathan Schwarz, Alexander G. de G. Matthews, Razvan Pascanu, Yee Whye Teh:
Functional Regularisation for Continual Learning with Gaussian Processes. ICLR 2020
- [c116]Umut Simsekli, Lingjiong Zhu, Yee Whye Teh, Mert Gürbüzbalaban:
Fractional Underdamped Langevin Dynamics: Retargeting SGD with Momentum under Heavy-Tailed Gradient Noise. ICML 2020: 8970-8980
- [c115]Joost van Amersfoort, Lewis Smith, Yee Whye Teh, Yarin Gal:
Uncertainty Estimation Using a Single Deep Deterministic Neural Network. ICML 2020: 9690-9700
- [c114]Jin Xu, Jean-Francois Ton, Hyunjik Kim, Adam R. Kosiorek, Yee Whye Teh:
MetaFun: Meta-Learning with Iterative Functional Updates. ICML 2020: 10617-10627
- [c113]Yuan Zhou, Hongseok Yang, Yee Whye Teh, Tom Rainforth:
Divide, Conquer, and Combine: a New Inference Strategy for Probabilistic Programs with Stochastic Support. ICML 2020: 11534-11545
- [c112]Bobby He, Balaji Lakshminarayanan, Yee Whye Teh:
Bayesian Deep Ensembles via the Neural Tangent Kernel. NeurIPS 2020
- [c111]Juho Lee, Yoonho Lee, Jungtaek Kim, Eunho Yang, Sung Ju Hwang, Yee Whye Teh:
Bootstrapping neural processes. NeurIPS 2020
- [c110]Mrinank Sharma, Sören Mindermann, Jan Markus Brauner, Gavin Leech, Anna B. Stephenson, Tomas Gavenciak, Jan Kulveit, Yee Whye Teh, Leonid Chindelevitch, Yarin Gal:
How Robust are the Estimated Effects of Nonpharmaceutical Interventions against COVID-19? NeurIPS 2020
- [i83]Umut Simsekli, Lingjiong Zhu, Yee Whye Teh, Mert Gürbüzbalaban:
Fractional Underdamped Langevin Dynamics: Retargeting SGD with Momentum under Heavy-Tailed Gradient Noise. CoRR abs/2002.05685 (2020)
- [i82]Soufiane Hayou, Jean-Francois Ton, Arnaud Doucet, Yee Whye Teh:
Pruning untrained neural networks: Principles and Analysis. CoRR abs/2002.08797 (2020)
- [i81]Joost van Amersfoort, Lewis Smith, Yee Whye Teh, Yarin Gal:
Simple and Scalable Epistemic Uncertainty Estimation Using a Single Deep Deterministic Neural Network. CoRR abs/2003.02037 (2020)
- [i80]Giuseppe Di Benedetto, François Caron, Yee Whye Teh:
Non-exchangeable feature allocation models with sublinear growth of the feature sizes. CoRR abs/2003.13491 (2020)
- [i79]Sheheryar Zaidi, Arber Zela, Thomas Elsken, Chris Holmes, Frank Hutter, Yee Whye Teh:
Neural Ensemble Search for Performant and Calibrated Predictions. CoRR abs/2006.08573 (2020)
- [i78]Bobby He, Balaji Lakshminarayanan, Yee Whye Teh:
Bayesian Deep Ensembles via the Neural Tangent Kernel. CoRR abs/2007.05864 (2020)
- [i77]Bryn Elesedy, Varun Kanade, Yee Whye Teh:
Lottery Tickets in Linear Models: An Analysis of Iterative Magnitude Pruning. CoRR abs/2007.08243 (2020)
- [i76]Mrinank Sharma, Sören Mindermann, Jan Markus Brauner, Gavin Leech, Anna B. Stephenson, Tomas Gavenciak, Jan Kulveit, Yee Whye Teh, Leonid Chindelevitch, Yarin Gal:
On the robustness of effectiveness estimation of nonpharmaceutical interventions against COVID-19 transmission. CoRR abs/2007.13454 (2020)
- [i75]Juho Lee, Yoonho Lee, Jungtaek Kim, Eunho Yang, Sung Ju Hwang, Yee Whye Teh:
Bootstrapping Neural Processes. CoRR abs/2008.02956 (2020)
- [i74]Alexandre Galashov, Jakub Sygnowski, Guillaume Desjardins, Jan Humplik, Leonard Hasenclever, Rae Jeong, Yee Whye Teh, Nicolas Heess:
Importance Weighted Policy Learning and Adaption. CoRR abs/2009.04875 (2020)
- [i73]Dhruva Tirumala, Alexandre Galashov, Hyeonwoo Noh, Leonard Hasenclever, Razvan Pascanu, Jonathan Schwarz, Guillaume Desjardins, Wojciech Marian Czarnecki, Arun Ahuja, Yee Whye Teh, Nicolas Heess:
Behavior Priors for Efficient Reinforcement Learning. CoRR abs/2010.14274 (2020)
- [i72]Ari Pakman, Yueqi Wang, Yoonho Lee, Pallab Basu, Juho Lee, Yee Whye Teh, Liam Paninski:
Attentive Clustering Processes. CoRR abs/2010.15727 (2020)
- [i71]Peter Holderrieth, Michael Hutchinson, Yee Whye Teh:
Equivariant Conditional Neural Processes. CoRR abs/2011.12916 (2020)
- [i70]Michael Hutchinson, Charline Le Lan, Sheheryar Zaidi, Emilien Dupont, Yee Whye Teh, Hyunjik Kim:
LieTransformer: Equivariant self-attention for Lie Groups. CoRR abs/2012.10885 (2020)
2010 – 2019
- 2019
- [c109]Alexandre Galashov, Siddhant M. Jayakumar, Leonard Hasenclever, Dhruva Tirumala, Jonathan Schwarz, Guillaume Desjardins, Wojciech M. Czarnecki, Yee Whye Teh, Razvan Pascanu, Nicolas Heess:
Information asymmetry in KL-regularized RL. ICLR (Poster) 2019
- [c108]Hyunjik Kim, Andriy Mnih, Jonathan Schwarz, Marta Garnelo, S. M. Ali Eslami, Dan Rosenbaum, Oriol Vinyals, Yee Whye Teh:
Attentive Neural Processes. ICLR (Poster) 2019
- [c107]Josh Merel, Leonard Hasenclever, Alexandre Galashov, Arun Ahuja, Vu Pham, Greg Wayne, Yee Whye Teh, Nicolas Heess:
Neural Probabilistic Motor Primitives for Humanoid Control. ICLR (Poster) 2019
- [c106]Eric T. Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Görür, Balaji Lakshminarayanan:
Do Deep Generative Models Know What They Don't Know? ICLR (Poster) 2019
- [c105]Stefan Webb, Tom Rainforth, Yee Whye Teh, M. Pawan Kumar:
A Statistical Approach to Assessing Neural Network Robustness. ICLR (Poster) 2019
- [c104]Juho Lee, Yoonho Lee, Jungtaek Kim, Adam R. Kosiorek, Seungjin Choi, Yee Whye Teh:
Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks. ICML 2019: 3744-3753
- [c103]Emile Mathieu, Tom Rainforth, N. Siddharth, Yee Whye Teh:
Disentangling Disentanglement in Variational Autoencoders. ICML 2019: 4402-4412
- [c102]Eric T. Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Görür, Balaji Lakshminarayanan:
Hybrid Models with Deep and Invertible Features. ICML 2019: 4723-4732
- [c101]Emilien Dupont, Arnaud Doucet, Yee Whye Teh:
Augmented Neural ODEs. NeurIPS 2019: 3134-3144
- [c100]Dushyant Rao, Francesco Visin, Andrei A. Rusu, Razvan Pascanu, Yee Whye Teh, Raia Hadsell:
Continual Unsupervised Representation Learning. NeurIPS 2019: 7645-7655
- [c99]Shufei Ge, Shijia Wang, Yee Whye Teh, Liangliang Wang, Lloyd T. Elliott:
Random Tessellation Forests. NeurIPS 2019: 9571-9581
- [c98]Emile Mathieu, Charline Le Lan, Chris J. Maddison, Ryota Tomioka, Yee Whye Teh:
Continuous Hierarchical Representations with Poincaré Variational Auto-Encoders. NeurIPS 2019: 12544-12555
- [c97]Adam Foster, Martin Jankowiak, Eli Bingham, Paul Horsfall, Yee Whye Teh, Tom Rainforth, Noah D. Goodman:
Variational Bayesian Optimal Experimental Design. NeurIPS 2019: 14036-14047
- [c96]Adam R. Kosiorek, Sara Sabour, Yee Whye Teh, Geoffrey E. Hinton:
Stacked Capsule Autoencoders. NeurIPS 2019: 15486-15496
- [c95]Tuan Anh Le, Adam R. Kosiorek, N. Siddharth, Yee Whye Teh, Frank Wood:
Revisiting Reweighted Wake-Sleep for Models with Stochastic Control Flow. UAI 2019: 1039-1049
- [i69]Hyunjik Kim, Andriy Mnih, Jonathan Schwarz, Marta Garnelo, S. M. Ali Eslami, Dan Rosenbaum, Oriol Vinyals, Yee Whye Teh:
Attentive Neural Processes. CoRR abs/1901.05761 (2019)
- [i68]Emile Mathieu, Charline Le Lan, Chris J. Maddison, Ryota Tomioka, Yee Whye Teh:
Hierarchical Representations with Poincaré Variational Auto-Encoders. CoRR abs/1901.06033 (2019)
- [i67]Benjamin Bloem-Reddy, Yee Whye Teh:
Probabilistic symmetry and invariant neural networks. CoRR abs/1901.06082 (2019)
- [i66]Michalis K. Titsias, Jonathan Schwarz, Alexander G. de G. Matthews, Razvan Pascanu, Yee Whye Teh:
Functional Regularisation for Continual Learning using Gaussian Processes. CoRR abs/1901.11356 (2019)
- [i65]Eric T. Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Görür, Balaji Lakshminarayanan:
Hybrid Models with Deep and Invertible Features. CoRR abs/1902.02767 (2019)
- [i64]Adam Foster, Martin Jankowiak, Eli Bingham, Paul Horsfall, Yee Whye Teh, Tom Rainforth, Noah D. Goodman:
Variational Estimators for Bayesian Optimal Experimental Design. CoRR abs/1903.05480 (2019)
- [i63]Dhruva Tirumala, Hyeonwoo Noh, Alexandre Galashov, Leonard Hasenclever, Arun Ahuja, Greg Wayne, Razvan Pascanu, Yee Whye Teh, Nicolas Heess:
Exploiting Hierarchy for Learning and Transfer in KL-regularized RL. CoRR abs/1903.07438 (2019)
- [i62]Alexandre Galashov, Jonathan Schwarz, Hyunjik Kim, Marta Garnelo, David Saxton, Pushmeet Kohli, S. M. Ali Eslami, Yee Whye Teh:
Meta-Learning surrogate models for sequential decision making. CoRR abs/1903.11907 (2019)
- [i61]Emilien Dupont, Arnaud Doucet, Yee Whye Teh:
Augmented Neural ODEs. CoRR abs/1904.01681 (2019)
- [i60]Alexandre Galashov, Siddhant M. Jayakumar, Leonard Hasenclever, Dhruva Tirumala, Jonathan Schwarz, Guillaume Desjardins, Wojciech M. Czarnecki, Yee Whye Teh, Razvan Pascanu, Nicolas Heess:
Information asymmetry in KL-regularized RL. CoRR abs/1905.01240 (2019)
- [i59]Pedro A. Ortega, Jane X. Wang, Mark Rowland, Tim Genewein, Zeb Kurth-Nelson, Razvan Pascanu, Nicolas Heess, Joel Veness, Alexander Pritzel, Pablo Sprechmann, Siddhant M. Jayakumar, Tom McGrath, Kevin J. Miller, Mohammad Gheshlaghi Azar, Ian Osband, Neil C. Rabinowitz, András György, Silvia Chiappa, Simon Osindero, Yee Whye Teh, Hado van Hasselt, Nando de Freitas, Matthew Botvinick, Shane Legg:
Meta-learning of Sequential Strategies. CoRR abs/1905.03030 (2019)
- [i58]Jan Humplik, Alexandre Galashov, Leonard Hasenclever, Pedro A. Ortega, Yee Whye Teh, Nicolas Heess:
Meta reinforcement learning as task inference. CoRR abs/1905.06424 (2019)
- [i57]Bradley Gram-Hansen, Christian Schröder de Witt, Tom Rainforth, Philip H. S. Torr, Yee Whye Teh, Atilim Günes Baydin:
Hijacking Malaria Simulators with Probabilistic Programming. CoRR abs/1905.12432 (2019)
- [i56]Jean-Francois Ton, Lucian Chan, Yee Whye Teh, Dino Sejdinovic:
Noise Contrastive Meta-Learning for Conditional Density Estimation using Kernel Mean Embeddings. CoRR abs/1906.02236 (2019)
- [i55]Eric T. Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Balaji Lakshminarayanan:
Detecting Out-of-Distribution Inputs to Deep Generative Models Using a Test for Typicality. CoRR abs/1906.02994 (2019)
- [i54]Xu He, Jakub Sygnowski, Alexandre Galashov, Andrei A. Rusu, Yee Whye Teh, Razvan Pascanu:
Task Agnostic Continual Learning via Meta Learning. CoRR abs/1906.05201 (2019)
- [i53]Shufei Ge, Shijia Wang, Yee Whye Teh, Liangliang Wang, Lloyd T. Elliott:
Random Tessellation Forests. CoRR abs/1906.05440 (2019)
- [i52]Adam R. Kosiorek, Sara Sabour, Yee Whye Teh, Geoffrey E. Hinton:
Stacked Capsule Autoencoders. CoRR abs/1906.06818 (2019)
- [i51]Juho Lee, Yoonho Lee, Yee Whye Teh:
Deep Amortized Clustering. CoRR abs/1909.13433 (2019)
- [i50]Saeid Naderiparizi, Adam Scibior, Andreas Munk, Mehrdad Ghadiri, Atilim Günes Baydin, Bradley Gram-Hansen, Christian Schröder de Witt, Robert Zinkov, Philip H. S. Torr, Tom Rainforth, Yee Whye Teh, Frank Wood:
Amortized Rejection Sampling in Universal Probabilistic Programming. CoRR abs/1910.09056 (2019)
- [i49]Yuan Zhou, Hongseok Yang, Yee Whye Teh, Tom Rainforth:
Divide, Conquer, and Combine: a New Inference Strategy for Probabilistic Programs with Stochastic Support. CoRR abs/1910.13324 (2019)
- [i48]Dushyant Rao, Francesco Visin, Andrei A. Rusu, Yee Whye Teh, Razvan Pascanu, Raia Hadsell:
Continual Unsupervised Representation Learning. CoRR abs/1910.14481 (2019)
- [i47]Adam Foster, Martin Jankowiak, Matthew O'Meara, Yee Whye Teh, Tom Rainforth:
A Unified Stochastic Gradient Approach to Designing Bayesian-Optimal Experiments. CoRR abs/1911.00294 (2019)
- [i46]Jin Xu, Jean-Francois Ton, Hyunjik Kim, Adam R. Kosiorek, Yee Whye Teh:
MetaFun: Meta-Learning with Iterative Functional Updates. CoRR abs/1912.02738 (2019)
- 2018
- [c94]Mark Rowland, Marc G. Bellemare, Will Dabney, Rémi Munos, Yee Whye Teh:
An Analysis of Categorical Distributional Reinforcement Learning. AISTATS 2018: 29-37
- [c93]Hyunjik Kim, Yee Whye Teh:
Scaling up the Automatic Statistician: Scalable Structure Discovery using Gaussian Processes. AISTATS 2018: 575-584
- [c92]Wojciech Marian Czarnecki, Siddhant M. Jayakumar, Max Jaderberg, Leonard Hasenclever, Yee Whye Teh, Nicolas Heess, Simon Osindero, Razvan Pascanu:
Mix & Match Agent Curricula for Reinforcement Learning. ICML 2018: 1095-1103
- [c91]Marta Garnelo, Dan Rosenbaum, Christopher Maddison, Tiago Ramalho, David Saxton, Murray Shanahan, Yee Whye Teh, Danilo Jimenez Rezende, S. M. Ali Eslami:
Conditional Neural Processes. ICML 2018: 1690-1699
- [c90]Tom Rainforth, Adam R. Kosiorek, Tuan Anh Le, Chris J. Maddison, Maximilian Igl, Frank Wood, Yee Whye Teh:
Tighter Variational Bounds are Not Necessarily Better. ICML 2018: 4274-4282
- [c89]Jonathan Schwarz, Wojciech Czarnecki, Jelena Luketina, Agnieszka Grabska-Barwinska, Yee Whye Teh, Razvan Pascanu, Raia Hadsell:
Progress & Compress: A scalable framework for continual learning. ICML 2018: 4535-4544
- [c88]Yee Whye Teh:
On Big Data Learning for Small Data Problems. KDD 2018: 3
- [c87]Xenia Miscouridou, Francois Caron, Yee Whye Teh:
Modelling sparsity, heterogeneity, reciprocity and community structure in temporal interaction data. NeurIPS 2018: 2349-2358
- [c86]Stefan Webb, Adam Golinski, Robert Zinkov, Siddharth Narayanaswamy, Tom Rainforth, Yee Whye Teh, Frank Wood:
Faithful Inversion of Generative Models for Effective Amortized Inference. NeurIPS 2018: 3074-3084
- [c85]Jovana Mitrovic, Dino Sejdinovic, Yee Whye Teh:
Causal Inference via Kernel Deviance Measures. NeurIPS 2018: 6986-6994
- [c84]Jianfei Chen, Jun Zhu, Yee Whye Teh, Tong Zhang:
Stochastic Expectation Maximization with Variance Reduction. NeurIPS 2018: 7978-7988
- [c83]Adam R. Kosiorek, Hyunjik Kim, Yee Whye Teh, Ingmar Posner:
Sequential Attend, Infer, Repeat: Generative Modelling of Moving Objects. NeurIPS 2018: 8615-8625
- [c82]Benjamin Bloem-Reddy, Adam Foster, Emile Mathieu, Yee Whye Teh:
Sampling and Inference for Beta Neutral-to-the-Left Models of Sparse Networks. UAI 2018: 477-486
- [i45]Tom Rainforth, Adam R. Kosiorek, Tuan Anh Le, Chris J. Maddison, Maximilian Igl, Frank Wood, Yee Whye Teh:
Tighter Variational Bounds are Not Necessarily Better. CoRR abs/1802.04537 (2018)
- [i44]Jovana Mitrovic, Dino Sejdinovic, Yee Whye Teh:
Causal Inference via Kernel Deviance Measures. CoRR abs/1804.04622 (2018)
- [i43]Jonathan Schwarz, Jelena Luketina, Wojciech M. Czarnecki, Agnieszka Grabska-Barwinska, Yee Whye Teh, Razvan Pascanu, Raia Hadsell:
Progress & Compress: A scalable framework for continual learning. CoRR abs/1805.06370 (2018)
- [i42]Tuan Anh Le, Adam R. Kosiorek, N. Siddharth, Yee Whye Teh, Frank Wood:
Revisiting Reweighted Wake-Sleep. CoRR abs/1805.10469 (2018)
- [i41]Wojciech Marian Czarnecki, Siddhant M. Jayakumar, Max Jaderberg, Leonard Hasenclever, Yee Whye Teh, Simon Osindero, Nicolas Heess, Razvan Pascanu:
Mix&Match - Agent Curricula for Reinforcement Learning. CoRR abs/1806.01780 (2018)
- [i40]Adam R. Kosiorek, Hyunjik Kim, Ingmar Posner, Yee Whye Teh:
Sequential Attend, Infer, Repeat: Generative Modelling of Moving Objects. CoRR abs/1806.01794 (2018)
- [i39]Jin Xu, Yee Whye Teh:
Controllable Semantic Image Inpainting. CoRR abs/1806.05953 (2018)
- [i38]Marta Garnelo, Dan Rosenbaum, Chris J. Maddison, Tiago Ramalho, David Saxton, Murray Shanahan, Yee Whye Teh, Danilo J. Rezende, S. M. Ali Eslami:
Conditional Neural Processes. CoRR abs/1807.01613 (2018)
- [i37]Marta Garnelo, Jonathan Schwarz, Dan Rosenbaum, Fabio Viola, Danilo J. Rezende, S. M. Ali Eslami, Yee Whye Teh:
Neural Processes. CoRR abs/1807.01622 (2018)
- [i36]Benjamin Bloem-Reddy, Adam Foster, Emile Mathieu, Yee Whye Teh:
Sampling and Inference for Beta Neutral-to-the-Left Models of Sparse Networks. CoRR abs/1807.03113 (2018)
- [i35]Chris J. Maddison, Daniel Paulin, Yee Whye Teh, Brendan O'Donoghue, Arnaud Doucet:
Hamiltonian Descent Methods. CoRR abs/1809.05042 (2018)
- [i34]Juho Lee, Yoonho Lee, Jungtaek Kim, Adam R. Kosiorek, Seungjin Choi, Yee Whye Teh:
Set Transformer. CoRR abs/1810.00825 (2018)
- [i33]Eric T. Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Görür, Balaji Lakshminarayanan:
Do Deep Generative Models Know What They Don't Know? CoRR abs/1810.09136 (2018)
- [i32]Xiaoyu Lu, Tom Rainforth, Yuan Zhou, Jan-Willem van de Meent, Yee Whye Teh:
On Exploration, Exploitation and Learning in Adaptive Importance Sampling. CoRR abs/1810.13296 (2018)
- [i31]Stefan Webb, Tom Rainforth, Yee Whye Teh, M. Pawan Kumar:
A Statistical Approach to Assessing Neural Network Robustness. CoRR abs/1811.07209 (2018)
- [i30]Josh Merel, Leonard Hasenclever, Alexandre Galashov, Arun Ahuja, Vu Pham, Greg Wayne, Yee Whye Teh, Nicolas Heess:
Neural probabilistic motor primitives for humanoid control. CoRR abs/1811.11711 (2018)
- [i29]Emile Mathieu, Tom Rainforth, Siddharth Narayanaswamy, Yee Whye Teh:
Disentangling Disentanglement. CoRR abs/1812.02833 (2018)
- 2017
- [j14]Leonard Hasenclever, Stefan Webb, Thibaut Lienart, Sebastian J. Vollmer, Balaji Lakshminarayanan, Charles Blundell, Yee Whye Teh:
Distributed Bayesian Learning with Stochastic Natural Gradient Expectation Propagation and the Posterior Server. J. Mach. Learn. Res. 18: 106:1-106:37 (2017)
- [j13]Valerio Perrone, Paul A. Jenkins, Dario Spanò, Yee Whye Teh:
Poisson Random Fields for Dynamic Feature Models. J. Mach. Learn. Res. 18: 127:1-127:45 (2017)
- [c81]Seth R. Flaxman, Yee Whye Teh, Dino Sejdinovic:
Poisson intensity estimation with reproducing kernels. AISTATS 2017: 270-279
- [c80]Xiaoyu Lu, Valerio Perrone, Leonard Hasenclever, Yee Whye Teh, Sebastian J. Vollmer:
Relativistic Monte Carlo. AISTATS 2017: 1236-1245
- [c79]Chris J. Maddison, Dieterich Lawson, George Tucker, Nicolas Heess, Arnaud Doucet, Andriy Mnih, Yee Whye Teh:
Particle Value Functions. ICLR (Workshop) 2017
- [c78]Chris J. Maddison, Andriy Mnih, Yee Whye Teh:
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables. ICLR (Poster) 2017
- [c77]Jovana Mitrovic, Dino Sejdinovic, Yee Whye Teh:
Deep Kernel Machines via the Kernel Reparametrization Trick. ICLR (Workshop) 2017
- [c76]Yee Whye Teh, Victor Bapst, Wojciech M. Czarnecki, John Quan, James Kirkpatrick, Raia Hadsell, Nicolas Heess, Razvan Pascanu:
Distral: Robust multitask reinforcement learning. NIPS 2017: 4496-4506
- [c75]Chris J. Maddison, Dieterich Lawson, George Tucker, Nicolas Heess, Mohammad Norouzi, Andriy Mnih, Arnaud Doucet, Yee Whye Teh:
Filtering Variational Objectives. NIPS 2017: 6573-6583
- [e2]Doina Precup, Yee Whye Teh:
Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017. Proceedings of Machine Learning Research 70, PMLR 2017 [contents]
- [r4]Peter Orbanz, Yee Whye Teh:
Bayesian Nonparametric Models. Encyclopedia of Machine Learning and Data Mining 2017: 107-116
- [r3]Yee Whye Teh:
Dirichlet Process. Encyclopedia of Machine Learning and Data Mining 2017: 361-370
- [i28]Chris J. Maddison, Dieterich Lawson, George Tucker, Nicolas Heess, Arnaud Doucet, Andriy Mnih, Yee Whye Teh:
Particle Value Functions. CoRR abs/1703.05820 (2017)
- [i27]Chris J. Maddison, Dieterich Lawson, George Tucker, Nicolas Heess, Mohammad Norouzi, Andriy Mnih, Arnaud Doucet, Yee Whye Teh:
Filtering Variational Objectives. CoRR abs/1705.09279 (2017)
- [i26]Hyunjik Kim, Yee Whye Teh:
Scaling up the Automatic Statistician: Scalable Structure Discovery using Gaussian Processes. CoRR abs/1706.02524 (2017)
- [i25]Yee Whye Teh, Victor Bapst, Wojciech Marian Czarnecki, John Quan, James Kirkpatrick, Raia Hadsell, Nicolas Heess, Razvan Pascanu:
Distral: Robust Multitask Reinforcement Learning. CoRR abs/1707.04175 (2017)
- [i24]Stefan Webb, Adam Golinski, Robert Zinkov, N. Siddharth, Yee Whye Teh, Frank D. Wood:
Faithful Model Inversion Substantially Improves Auto-encoding Variational Inference. CoRR abs/1712.00287 (2017)
- 2016
- [j12]Yee Whye Teh, Alexandre H. Thiery, Sebastian J. Vollmer:
Consistency and Fluctuations For Stochastic Gradient Langevin Dynamics. J. Mach. Learn. Res. 17: 7:1-7:33 (2016)
- [j11]Sebastian J. Vollmer, Konstantinos C. Zygalakis, Yee Whye Teh:
Exploration of the (Non-)Asymptotic Bias and Variance of Stochastic Gradient Langevin Dynamics. J. Mach. Learn. Res. 17: 159:1-159:48 (2016)
- [c74]Balaji Lakshminarayanan, Daniel M. Roy, Yee Whye Teh:
Mondrian Forests for Large-Scale Regression when Uncertainty Matters. AISTATS 2016: 1478-1487
- [c73]Hyunjik Kim, Yee Whye Teh:
Scalable Structure Discovery in Regression using Gaussian Processes. AutoML@ICML 2016: 31-40
- [c72]Jovana Mitrovic, Dino Sejdinovic, Yee Whye Teh:
DR-ABC: Approximate Bayesian Computation with Kernel-Based Distribution Regression. ICML 2016: 1482-1491
- [c71]Tamara Fernandez, Nicolas Rivera, Yee Whye Teh:
Gaussian Processes for Survival Analysis. NIPS 2016: 5015-5023
- [c70]Matej Balog, Balaji Lakshminarayanan, Zoubin Ghahramani, Daniel M. Roy, Yee Whye Teh:
The Mondrian Kernel. UAI 2016
- [i23]Jovana Mitrovic, Dino Sejdinovic, Yee Whye Teh:
DR-ABC: Approximate Bayesian Computation with Kernel-Based Distribution Regression. CoRR abs/1602.04805 (2016)
- [i22]Dorota Glowacka, Yee Whye Teh, John Shawe-Taylor:
Image Retrieval with a Bayesian Model of Relevance Feedback. CoRR abs/1603.09522 (2016)
- [i21]Hyunjik Kim, Xiaoyu Lu, Seth R. Flaxman, Yee Whye Teh:
Tucker Gaussian Process for Regression and Collaborative Filtering. CoRR abs/1605.07025 (2016)
- [i20]Chris J. Maddison, Andriy Mnih, Yee Whye Teh:
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables. CoRR abs/1611.00712 (2016)
- 2015
- [j10]Pablo G. Moreno, Antonio Artés-Rodríguez, Yee Whye Teh, Fernando Pérez-Cruz:
Bayesian nonparametric crowdsourcing. J. Mach. Learn. Res. 16: 1607-1627 (2015)
- [j9]Ryan P. Adams, Emily B. Fox, Erik B. Sudderth, Yee Whye Teh:
Guest Editors' Introduction to the Special Issue on Bayesian Nonparametrics. IEEE Trans. Pattern Anal. Mach. Intell. 37(2): 209-211 (2015)
- [j8]Stefano Favaro, Maria Lomeli, Yee Whye Teh:
On a class of σ-stable Poisson-Kingman models and an effective marginalized sampler. Stat. Comput. 25(1): 67-78 (2015)
- [c69]Balaji Lakshminarayanan, Daniel M. Roy, Yee Whye Teh:
Particle Gibbs for Bayesian Additive Regression Trees. AISTATS 2015
- [c68]Maria Lomeli, Stefano Favaro, Yee Whye Teh:
A hybrid sampler for Poisson-Kingman mixture models. NIPS 2015: 2161-2169
- [c67]Thibaut Lienart, Yee Whye Teh, Arnaud Doucet:
Expectation Particle Belief Propagation. NIPS 2015: 3609-3617
- [i19]