
Jascha Sohl-Dickstein
Person information
- affiliation: Google Brain, Mountain View, CA, USA
- affiliation (PhD 2012): UC Berkeley, Redwood Center for Theoretical Neuroscience, CA, USA
2020 – today
- 2021
- [i62] Luke Metz, C. Daniel Freeman, Niru Maheswaranathan, Jascha Sohl-Dickstein: Training Learned Optimizers with Randomly Initialized Learned Optimizers. CoRR abs/2101.07367 (2021)
- 2020
- [c43] Roman Novak, Lechao Xiao, Jiri Hron, Jaehoon Lee, Alexander A. Alemi, Jascha Sohl-Dickstein, Samuel S. Schoenholz: Neural Tangents: Fast and Easy Infinite Neural Networks in Python. ICLR 2020
- [c42] Jiri Hron, Yasaman Bahri, Jascha Sohl-Dickstein, Roman Novak: Infinite attention: NNGP and NTK for deep attention networks. ICML 2020: 4376-4386
- [c41] Tong Che, Ruixiang Zhang, Jascha Sohl-Dickstein, Hugo Larochelle, Liam Paull, Yuan Cao, Yoshua Bengio: Your GAN is Secretly an Energy-based Model and You Should Use Discriminator Driven Latent Sampling. NeurIPS 2020
- [c40] Jaehoon Lee, Samuel S. Schoenholz, Jeffrey Pennington, Ben Adlam, Lechao Xiao, Roman Novak, Jascha Sohl-Dickstein: Finite Versus Infinite Neural Networks: an Empirical Study. NeurIPS 2020
- [i61] Jascha Sohl-Dickstein, Roman Novak, Samuel S. Schoenholz, Jaehoon Lee: On the infinite width limit of neural networks with a standard parameterization. CoRR abs/2001.07301 (2020)
- [i60] Luke Metz, Niru Maheswaranathan, Ruoxi Sun, C. Daniel Freeman, Ben Poole, Jascha Sohl-Dickstein: Using a thousand optimization tasks to learn hyperparameter search strategies. CoRR abs/2002.11887 (2020)
- [i59] Aitor Lewkowycz, Yasaman Bahri, Ethan Dyer, Jascha Sohl-Dickstein, Guy Gur-Ari: The large learning rate phase of deep learning: the catapult mechanism. CoRR abs/2003.02218 (2020)
- [i58] Tong Che, Ruixiang Zhang, Jascha Sohl-Dickstein, Hugo Larochelle, Liam Paull, Yuan Cao, Yoshua Bengio: Your GAN is Secretly an Energy-based Model and You Should Use Discriminator Driven Latent Sampling. CoRR abs/2003.06060 (2020)
- [i57] Jiri Hron, Yasaman Bahri, Jascha Sohl-Dickstein, Roman Novak: Infinite attention: NNGP and NTK for deep attention networks. CoRR abs/2006.10540 (2020)
- [i56] Jiri Hron, Yasaman Bahri, Roman Novak, Jeffrey Pennington, Jascha Sohl-Dickstein: Exact posterior distributions of wide Bayesian neural networks. CoRR abs/2006.10541 (2020)
- [i55] Jascha Sohl-Dickstein, Peter Battaglino, Michael Robert DeWeese: A new method for parameter estimation in probabilistic models: Minimum probability flow. CoRR abs/2007.09240 (2020)
- [i54] Jaehoon Lee, Samuel S. Schoenholz, Jeffrey Pennington, Ben Adlam, Lechao Xiao, Roman Novak, Jascha Sohl-Dickstein: Finite Versus Infinite Neural Networks: an Empirical Study. CoRR abs/2007.15801 (2020)
- [i53] Neha S. Wadia, Daniel Duckworth, Samuel S. Schoenholz, Ethan Dyer, Jascha Sohl-Dickstein: Whitening and second order optimization both destroy information about the dataset, and can make generalization impossible. CoRR abs/2008.07545 (2020)
- [i52] Luke Metz, Niru Maheswaranathan, C. Daniel Freeman, Ben Poole, Jascha Sohl-Dickstein: Tasks, stability, architecture, and compute: Training more effective learned optimizers, and using them to train themselves. CoRR abs/2009.11243 (2020)
- [i51] Vinay Rao, Jascha Sohl-Dickstein: Is Batch Norm unique? An empirical investigation and prescription to emulate the best properties of common normalizers without batch dependence. CoRR abs/2010.10687 (2020)
- [i50] Niru Maheswaranathan, David Sussillo, Luke Metz, Ruoxi Sun, Jascha Sohl-Dickstein: Reverse engineering learned optimizers reveals known and novel mechanisms. CoRR abs/2011.02159 (2020)
- [i49] Daniel S. Park, Jaehoon Lee, Daiyi Peng, Yuan Cao, Jascha Sohl-Dickstein: Towards NNGP-guided Neural Architecture Search. CoRR abs/2011.06006 (2020)
- [i48] Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole: Score-Based Generative Modeling through Stochastic Differential Equations. CoRR abs/2011.13456 (2020)
- [i47] Michael Laskin, Luke Metz, Seth Nabarrao, Mark Saroufim, Badreddine Noune, Carlo Luschi, Jascha Sohl-Dickstein, Pieter Abbeel: Parallel Training of Deep Networks with Local Updates. CoRR abs/2012.03837 (2020)
2010 – 2019
- 2019
- [j4] Christopher J. Shallue, Jaehoon Lee, Joseph M. Antognini, Jascha Sohl-Dickstein, Roy Frostig, George E. Dahl: Measuring the Effects of Data Parallelism on Neural Network Training. J. Mach. Learn. Res. 20: 112:1-112:49 (2019)
- [c39] Laurent Dinh, Jascha Sohl-Dickstein, Razvan Pascanu, Hugo Larochelle: A RAD approach to deep mixture models. DGS@ICLR 2019
- [c38] Gamaleldin F. Elsayed, Ian J. Goodfellow, Jascha Sohl-Dickstein: Adversarial Reprogramming of Neural Networks. ICLR (Poster) 2019
- [c37] Luke Metz, Niru Maheswaranathan, Brian Cheung, Jascha Sohl-Dickstein: Meta-Learning Update Rules for Unsupervised Representation Learning. ICLR 2019
- [c36] Roman Novak, Lechao Xiao, Yasaman Bahri, Jaehoon Lee, Greg Yang, Jiri Hron, Daniel A. Abolafia, Jeffrey Pennington, Jascha Sohl-Dickstein: Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes. ICLR (Poster) 2019
- [c35] Greg Yang, Jeffrey Pennington, Vinay Rao, Jascha Sohl-Dickstein, Samuel S. Schoenholz: A Mean Field Theory of Batch Normalization. ICLR (Poster) 2019
- [c34] Niru Maheswaranathan, Luke Metz, George Tucker, Dami Choi, Jascha Sohl-Dickstein: Guided evolutionary strategies: augmenting random search with surrogate gradients. ICML 2019: 4264-4273
- [c33] Luke Metz, Niru Maheswaranathan, Jeremy Nixon, C. Daniel Freeman, Jascha Sohl-Dickstein: Understanding and correcting pathologies in the training of learned optimizers. ICML 2019: 4556-4565
- [c32] Daniel S. Park, Jascha Sohl-Dickstein, Quoc V. Le, Samuel L. Smith: The Effect of Network Width on Stochastic Gradient Descent and Generalization: an Empirical Study. ICML 2019: 5042-5051
- [c31] Mahdi Karami, Dale Schuurmans, Jascha Sohl-Dickstein, Laurent Dinh, Daniel Duckworth: Invertible Convolutional Flow. NeurIPS 2019: 5636-5646
- [c30] Jaehoon Lee, Lechao Xiao, Samuel S. Schoenholz, Yasaman Bahri, Roman Novak, Jascha Sohl-Dickstein, Jeffrey Pennington: Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent. NeurIPS 2019: 8570-8581
- [i46] Jascha Sohl-Dickstein, Kenji Kawaguchi: Eliminating all bad Local Minima from Loss Landscapes without even adding an Extra Unit. CoRR abs/1901.03909 (2019)
- [i45] Jaehoon Lee, Lechao Xiao, Samuel S. Schoenholz, Yasaman Bahri, Jascha Sohl-Dickstein, Jeffrey Pennington: Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent. CoRR abs/1902.06720 (2019)
- [i44] Greg Yang, Jeffrey Pennington, Vinay Rao, Jascha Sohl-Dickstein, Samuel S. Schoenholz: A Mean Field Theory of Batch Normalization. CoRR abs/1902.08129 (2019)
- [i43] Laurent Dinh, Jascha Sohl-Dickstein, Razvan Pascanu, Hugo Larochelle: A RAD approach to deep mixture models. CoRR abs/1903.07714 (2019)
- [i42] Daniel S. Park, Jascha Sohl-Dickstein, Quoc V. Le, Samuel L. Smith: The Effect of Network Width on Stochastic Gradient Descent and Generalization: an Empirical Study. CoRR abs/1905.03776 (2019)
- [i41] Luke Metz, Niru Maheswaranathan, Jonathon Shlens, Jascha Sohl-Dickstein, Ekin D. Cubuk: Using learned optimizers to make models robust to input noise. CoRR abs/1906.03367 (2019)
- [i40] Stephan Hoyer, Jascha Sohl-Dickstein, Sam Greydanus: Neural reparameterization improves structural optimization. CoRR abs/1909.04240 (2019)
- [i39] Roman Novak, Lechao Xiao, Jiri Hron, Jaehoon Lee, Alexander A. Alemi, Jascha Sohl-Dickstein, Samuel S. Schoenholz: Neural Tangents: Fast and Easy Infinite Neural Networks in Python. CoRR abs/1912.02803 (2019)
- 2018
- [c29] Jaehoon Lee, Yasaman Bahri, Roman Novak, Samuel S. Schoenholz, Jeffrey Pennington, Jascha Sohl-Dickstein: Deep Neural Networks as Gaussian Processes. ICLR (Poster) 2018
- [c28] Daniel Levy, Matthew D. Hoffman, Jascha Sohl-Dickstein: Generalizing Hamiltonian Monte Carlo with Neural Networks. ICLR (Poster) 2018
- [c27] Luke Metz, Niru Maheswaranathan, Brian Cheung, Jascha Sohl-Dickstein: Learning to Learn Without Labels. ICLR (Workshop) 2018
- [c26] Roman Novak, Yasaman Bahri, Daniel A. Abolafia, Jeffrey Pennington, Jascha Sohl-Dickstein: Sensitivity and Generalization in Neural Networks: an Empirical Study. ICLR (Poster) 2018
- [c25] Lechao Xiao, Yasaman Bahri, Jascha Sohl-Dickstein, Samuel S. Schoenholz, Jeffrey Pennington: Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks. ICML 2018: 5389-5398
- [c24] Gamaleldin F. Elsayed, Shreya Shankar, Brian Cheung, Nicolas Papernot, Alexey Kurakin, Ian J. Goodfellow, Jascha Sohl-Dickstein: Adversarial Examples that Fool both Computer Vision and Time-Limited Humans. NeurIPS 2018: 3914-3924
- [c23] Joseph M. Antognini, Jascha Sohl-Dickstein: PCA of high dimensional random walks with comparison to neural network training. NeurIPS 2018: 10328-10337
- [i38] Gamaleldin F. Elsayed, Shreya Shankar, Brian Cheung, Nicolas Papernot, Alex Kurakin, Ian J. Goodfellow, Jascha Sohl-Dickstein: Adversarial Examples that Fool both Human and Computer Vision. CoRR abs/1802.08195 (2018)
- [i37] Roman Novak, Yasaman Bahri, Daniel A. Abolafia, Jeffrey Pennington, Jascha Sohl-Dickstein: Sensitivity and Generalization in Neural Networks: an Empirical Study. CoRR abs/1802.08760 (2018)
- [i36] Luke Metz, Niru Maheswaranathan, Brian Cheung, Jascha Sohl-Dickstein: Learning Unsupervised Learning Rules. CoRR abs/1804.00222 (2018)
- [i35] Lechao Xiao, Yasaman Bahri, Jascha Sohl-Dickstein, Samuel S. Schoenholz, Jeffrey Pennington: Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks. CoRR abs/1806.05393 (2018)
- [i34] Joseph M. Antognini, Jascha Sohl-Dickstein: PCA of high dimensional random walks with comparison to neural network training. CoRR abs/1806.08805 (2018)
- [i33] Samuel L. Smith, Daniel Duckworth, Quoc V. Le, Jascha Sohl-Dickstein: Stochastic natural gradient descent draws posterior samples in function space. CoRR abs/1806.09597 (2018)
- [i32] Niru Maheswaranathan, Luke Metz, George Tucker, Jascha Sohl-Dickstein: Guided evolutionary strategies: escaping the curse of dimensionality in random search. CoRR abs/1806.10230 (2018)
- [i31] Gamaleldin F. Elsayed, Ian J. Goodfellow, Jascha Sohl-Dickstein: Adversarial Reprogramming of Neural Networks. CoRR abs/1806.11146 (2018)
- [i30] Roman Novak, Lechao Xiao, Jaehoon Lee, Yasaman Bahri, Daniel A. Abolafia, Jeffrey Pennington, Jascha Sohl-Dickstein: Bayesian Convolutional Neural Networks with Many Channels are Gaussian Processes. CoRR abs/1810.05148 (2018)
- [i29] Luke Metz, Niru Maheswaranathan, Jeremy Nixon, C. Daniel Freeman, Jascha Sohl-Dickstein: Learned optimizers that outperform SGD on wall-clock and test loss. CoRR abs/1810.10180 (2018)
- [i28] Christopher J. Shallue, Jaehoon Lee, Joseph M. Antognini, Jascha Sohl-Dickstein, Roy Frostig, George E. Dahl: Measuring the Effects of Data Parallelism on Neural Network Training. CoRR abs/1811.03600 (2018)
- 2017
- [j3] Badr F. Albanna, Christopher Hillar, Jascha Sohl-Dickstein, Michael Robert DeWeese: Minimum and Maximum Entropy Distributions for Binary Systems with Known Means and Pairwise Correlations. Entropy 19(8): 427 (2017)
- [c22] Jasmine Collins, Jascha Sohl-Dickstein, David Sussillo: Capacity and Trainability in Recurrent Neural Networks. ICLR (Poster) 2017
- [c21] Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio: Density estimation using Real NVP. ICLR (Poster) 2017
- [c20] Justin Gilmer, Colin Raffel, Samuel S. Schoenholz, Maithra Raghu, Jascha Sohl-Dickstein: Explaining the Learning Dynamics of Direct Feedback Alignment. ICLR (Workshop) 2017
- [c19] Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein: Unrolled Generative Adversarial Networks. ICLR (Poster) 2017
- [c18] Samuel S. Schoenholz, Justin Gilmer, Surya Ganguli, Jascha Sohl-Dickstein: Deep Information Propagation. ICLR (Poster) 2017
- [c17] George Tucker, Andriy Mnih, Chris J. Maddison, Jascha Sohl-Dickstein: REBAR: Low-variance, unbiased gradient estimates for discrete latent variable models. ICLR (Workshop) 2017
- [c16] Jakob N. Foerster, Justin Gilmer, Jascha Sohl-Dickstein, Jan Chorowski, David Sussillo: Input Switched Affine Networks: An RNN Architecture Designed for Interpretability. ICML 2017: 1136-1145
- [c15] Maithra Raghu, Ben Poole, Jon M. Kleinberg, Surya Ganguli, Jascha Sohl-Dickstein: On the Expressive Power of Deep Neural Networks. ICML 2017: 2847-2854
- [c14] Olga Wichrowska, Niru Maheswaranathan, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Nando de Freitas, Jascha Sohl-Dickstein: Learned Optimizers that Scale and Generalize. ICML 2017: 3751-3760
- [c13] George Tucker, Andriy Mnih, Chris J. Maddison, Dieterich Lawson, Jascha Sohl-Dickstein: REBAR: Low-variance, unbiased gradient estimates for discrete latent variable models. NIPS 2017: 2627-2636
- [c12] Maithra Raghu, Justin Gilmer, Jason Yosinski, Jascha Sohl-Dickstein: SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability. NIPS 2017: 6076-6085
- [i27] Olga Wichrowska, Niru Maheswaranathan, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Nando de Freitas, Jascha Sohl-Dickstein: Learned Optimizers that Scale and Generalize. CoRR abs/1703.04813 (2017)
- [i26] George Tucker, Andriy Mnih, Chris J. Maddison, Jascha Sohl-Dickstein: REBAR: Low-variance, unbiased gradient estimates for discrete latent variable models. CoRR abs/1703.07370 (2017)
- [i25] Maithra Raghu, Justin Gilmer, Jason Yosinski, Jascha Sohl-Dickstein: SVCCA: Singular Vector Canonical Correlation Analysis for Deep Understanding and Improvement. CoRR abs/1706.05806 (2017)
- [i24] Samuel S. Schoenholz, Jeffrey Pennington, Jascha Sohl-Dickstein: A Correspondence Between Random Neural Networks and Statistical Field Theory. CoRR abs/1710.06570 (2017)
- [i23] Jaehoon Lee, Yasaman Bahri, Roman Novak, Samuel S. Schoenholz, Jeffrey Pennington, Jascha Sohl-Dickstein: Deep Neural Networks as Gaussian Processes. CoRR abs/1711.00165 (2017)
- [i22] Daniel Levy, Matthew D. Hoffman, Jascha Sohl-Dickstein: Generalizing Hamiltonian Monte Carlo with Neural Networks. CoRR abs/1711.09268 (2017)
- 2016
- [c11] Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein, Surya Ganguli: Exponential expressivity in deep neural networks through transient chaos. NIPS 2016: 3360-3368
- [i21] Subhaneil Lahiri, Jascha Sohl-Dickstein, Surya Ganguli: A universal tradeoff between power, precision and speed in physical communication. CoRR abs/1603.07758 (2016)
- [i20] Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio: Density estimation using Real NVP. CoRR abs/1605.08803 (2016)
- [i19] Maithra Raghu, Ben Poole, Jon M. Kleinberg, Surya Ganguli, Jascha Sohl-Dickstein: On the expressive power of deep neural networks. CoRR abs/1606.05336 (2016)
- [i18] Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein, Surya Ganguli: Exponential expressivity in deep neural networks through transient chaos. CoRR abs/1606.05340 (2016)
- [i17] Samuel S. Schoenholz, Justin Gilmer, Surya Ganguli, Jascha Sohl-Dickstein: Deep Information Propagation. CoRR abs/1611.01232 (2016)
- [i16] Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein: Unrolled Generative Adversarial Networks. CoRR abs/1611.02163 (2016)
- [i15] Maithra Raghu, Ben Poole, Jon M. Kleinberg, Surya Ganguli, Jascha Sohl-Dickstein: Survey of Expressivity in Deep Neural Networks. CoRR abs/1611.08083 (2016)
- [i14] Jakob N. Foerster, Justin Gilmer, Jan Chorowski, Jascha Sohl-Dickstein, David Sussillo: Intelligible Language Modeling with Input Switched Affine Networks. CoRR abs/1611.09434 (2016)
- [i13] Jasmine Collins, Jascha Sohl-Dickstein, David Sussillo: Capacity and Trainability in Recurrent Neural Networks. CoRR abs/1611.09913 (2016)
- [i12] Ben Poole, Alexander A. Alemi, Jascha Sohl-Dickstein, Anelia Angelova: Improved generator objectives for GANs. CoRR abs/1612.02780 (2016)
- 2015
- [j2] Jascha Sohl-Dickstein, Santani Teng, Benjamin M. Gaub, Chris C. Rodgers, Crystal Li, Michael Robert DeWeese, Nicol S. Harper: A Device for Human Ultrasonic Echolocation. IEEE Trans. Biomed. Eng. 62(6): 1526-1534 (2015)
- [c10] Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, Surya Ganguli: Deep Unsupervised Learning using Nonequilibrium Thermodynamics. ICML 2015: 2256-2265
- [c9] Chris Piech, Jonathan Bassen, Jonathan Huang, Surya Ganguli, Mehran Sahami, Leonidas J. Guibas, Jascha Sohl-Dickstein: Deep Knowledge Tracing. NIPS 2015: 505-513
- [i11] Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, Surya Ganguli: Deep Unsupervised Learning using Nonequilibrium Thermodynamics. CoRR abs/1503.03585 (2015)
- [i10] Jascha Sohl-Dickstein, Diederik P. Kingma: Technical Note on Equivalence Between Recurrent Neural Network Time Series Models and Variational Bayesian Models. CoRR abs/1504.08025 (2015)
- [i9] Chris Piech, Jonathan Spencer, Jonathan Huang, Surya Ganguli, Mehran Sahami, Leonidas J. Guibas, Jascha Sohl-Dickstein: Deep Knowledge Tracing. CoRR abs/1506.05908 (2015)
- 2014
- [j1] Urs Köster, Jascha Sohl-Dickstein, Charles M. Gray, Bruno A. Olshausen: Modeling Higher-Order Correlations within Cortical Microcolumns. PLoS Comput. Biol. 10(7) (2014)
- [c8] Jascha Sohl-Dickstein, Ben Poole, Surya Ganguli: Fast large-scale optimization by unifying stochastic gradient and quasi-Newton methods. ICML 2014: 604-612
- [c7] Jascha Sohl-Dickstein, Mayur Mudigonda, Michael Robert DeWeese: Hamiltonian Monte Carlo Without Detailed Balance. ICML 2014: 719-726
- [i8] Ben Poole, Jascha Sohl-Dickstein, Surya Ganguli: Analyzing noise in autoencoders and deep networks. CoRR abs/1406.1831 (2014)
- 2013
- [c6] Eliana Feasley, Chris Klaiber, James Irwin, Jace Kohlmeier, Jascha Sohl-Dickstein: Controlled experiments on millions of students to personalize learning. AIED Workshops 2013
- [c5] Joseph Jay Williams, Dave Paunesku, Benjamin Heley, Jascha Sohl-Dickstein: Measurably Increasing Motivation in MOOCs. AIED Workshops 2013
- [i7] Jascha Sohl-Dickstein, Ben Poole, Surya Ganguli: An adaptive low dimensional quasi-Newton sum of functions optimizer. CoRR abs/1311.2115 (2013)
- 2012
- [b1] Jascha Sohl-Dickstein: Efficient Methods for Unsupervised Learning of Probabilistic Models. University of California, Berkeley, USA, 2012
- [c4] Lucas Theis, Jascha Sohl-Dickstein, Matthias Bethge: Training sparse natural image models with a fast Gibbs sampler of an extended state space. NIPS 2012: 1133-1141
- [i6] Jascha Sohl-Dickstein: The Natural Gradient by Analogy to Signal Whitening, and Recipes and Tricks for its Use. CoRR abs/1205.1828 (2012)
- [i5] Jascha Sohl-Dickstein, Benjamin J. Culpepper: Hamiltonian Annealed Importance Sampling for partition function estimation. CoRR abs/1205.1925 (2012)
- [i4] Jascha Sohl-Dickstein: Hamiltonian Monte Carlo with Reduced Momentum Flips. CoRR abs/1205.1939 (2012)
- [i3] Jascha Sohl-Dickstein: Efficient Methods for Unsupervised Learning of Probabilistic Models. CoRR abs/1205.4295 (2012)
- 2011
- [c3] Ching Ming Wang, Jascha Sohl-Dickstein, Ivana Tosic, Bruno A. Olshausen: Lie Group Transformation Models for Predictive Video Coding. DCC 2011: 83-92
- [c2] Benjamin J. Culpepper, Jascha Sohl-Dickstein, Bruno A. Olshausen: Building a better probabilistic model of images by factorization. ICCV 2011: 2011-2017
- [c1] Jascha Sohl-Dickstein, Peter Battaglino, Michael Robert DeWeese: Minimum Probability Flow Learning. ICML 2011: 905-912
- 2010
- [i2] Jascha Sohl-Dickstein, Jimmy C. Wang, Bruno A. Olshausen: An Unsupervised Algorithm For Learning Lie Group Transformations. CoRR abs/1001.1027 (2010)
2000 – 2009
- 2009
- [i1] Jascha Sohl-Dickstein, Peter Battaglino, Michael Robert DeWeese: Minimum Probability Flow Learning. CoRR abs/0906.4779 (2009)