


Nathan Srebro (also published as Nati Srebro)
2020 – today
- 2022
- [i105]Gal Vardi, Ohad Shamir, Nathan Srebro:
The Sample Complexity of One-Hidden-Layer Neural Networks. CoRR abs/2202.06233 (2022) - [i104]Idan Amir, Roi Livni, Nathan Srebro:
Thinking Outside the Ball: Optimal Learning with Gradient Descent for Generalized Linear Stochastic Convex Optimization. CoRR abs/2202.13328 (2022) - 2021
- [j13]Chenxin Ma, Martin Jaggi, Frank E. Curtis, Nathan Srebro, Martin Takác:
An accelerated communication-efficient primal-dual optimization framework for structured machine learning. Optim. Methods Softw. 36(1): 20-44 (2021) - [c127]Suriya Gunasekar, Blake E. Woodworth, Nathan Srebro:
Mirrorless Mirror Descent: A Natural Derivation of Mirror Descent. AISTATS 2021: 2305-2313 - [c126]Pritish Kamath, Akilesh Tangella, Danica J. Sutherland, Nathan Srebro:
Does Invariant Risk Minimization Capture Invariance? AISTATS 2021: 4069-4077 - [c125]Omar Montasser, Steve Hanneke, Nathan Srebro:
Adversarially Robust Learning with Unknown Perturbation Sets. COLT 2021: 3452-3482 - [c124]Blake E. Woodworth, Brian Bullins, Ohad Shamir, Nathan Srebro:
The Min-Max Complexity of Distributed Stochastic Convex Optimization with Intermittent Communication. COLT 2021: 4386-4437 - [c123]Raman Arora, Peter Bartlett, Poorya Mianjy, Nathan Srebro:
Dropout: Explicit Forms and Capacity Control. ICML 2021: 351-361 - [c122]Shahar Azulay, Edward Moroshko, Mor Shpigel Nacson, Blake E. Woodworth, Nathan Srebro, Amir Globerson, Daniel Soudry:
On the Implicit Bias of Initialization Shape: Beyond Infinitesimal Mirror Descent. ICML 2021: 468-477 - [c121]Ziwei Ji, Nathan Srebro, Matus Telgarsky:
Fast margin maximization via dual acceleration. ICML 2021: 4860-4869 - [c120]Eran Malach, Pritish Kamath, Emmanuel Abbe, Nathan Srebro:
Quantifying the Benefit of Using Differentiable Learning over Tangent Kernels. ICML 2021: 7379-7389 - [c119]Blake E. Woodworth, Nathan Srebro:
An Even More Optimal Stochastic Optimization Algorithm: Minibatching and Interpolation Learning. NeurIPS 2021: 7333-7345 - [c118]Frederic Koehler, Lijia Zhou, Danica J. Sutherland, Nathan Srebro:
Uniform Convergence of Interpolators: Gaussian Width, Norm Bounds and Benign Overfitting. NeurIPS 2021: 20657-20668 - [c117]Emmanuel Abbe, Pritish Kamath, Eran Malach, Colin Sandon, Nathan Srebro:
On the Power of Differentiable Learning versus PAC and SQ Learning. NeurIPS 2021: 24340-24351 - [c116]Brian Bullins, Kumar Kshitij Patel, Ohad Shamir, Nathan Srebro, Blake E. Woodworth:
A Stochastic Newton Algorithm for Distributed Convex Optimization. NeurIPS 2021: 26818-26830 - [c115]Zhen Dai, Mina Karzand, Nathan Srebro:
Representation Costs of Linear Neural Networks: Analysis and Design. NeurIPS 2021: 26884-26896 - [i103]Pritish Kamath, Akilesh Tangella, Danica J. Sutherland, Nathan Srebro:
Does Invariant Risk Minimization Capture Invariance? CoRR abs/2101.01134 (2021) - [i102]Blake E. Woodworth, Brian Bullins, Ohad Shamir, Nathan Srebro:
The Min-Max Complexity of Distributed Stochastic Convex Optimization with Intermittent Communication. CoRR abs/2102.01583 (2021) - [i101]Omar Montasser, Steve Hanneke, Nathan Srebro:
Adversarially Robust Learning with Unknown Perturbation Sets. CoRR abs/2102.02145 (2021) - [i100]Shahar Azulay, Edward Moroshko, Mor Shpigel Nacson, Blake E. Woodworth, Nathan Srebro, Amir Globerson, Daniel Soudry:
On the Implicit Bias of Initialization Shape: Beyond Infinitesimal Mirror Descent. CoRR abs/2102.09769 (2021) - [i99]Eran Malach, Pritish Kamath, Emmanuel Abbe, Nathan Srebro:
Quantifying the Benefit of Using Differentiable Learning over Tangent Kernels. CoRR abs/2103.01210 (2021) - [i98]Gene Li, Pritish Kamath, Dylan J. Foster, Nathan Srebro:
Eluder Dimension and Generalized Rank. CoRR abs/2104.06970 (2021) - [i97]Blake E. Woodworth, Nathan Srebro:
An Even More Optimal Stochastic Optimization Algorithm: Minibatching and Interpolation Learning. CoRR abs/2106.02720 (2021) - [i96]Frederic Koehler, Lijia Zhou, Danica J. Sutherland, Nathan Srebro:
Uniform Convergence of Interpolators: Gaussian Width, Norm Bounds, and Benign Overfitting. CoRR abs/2106.09276 (2021) - [i95]Ziwei Ji, Nathan Srebro, Matus Telgarsky:
Fast Margin Maximization via Dual Acceleration. CoRR abs/2107.00595 (2021) - [i94]Emmanuel Abbe, Pritish Kamath, Eran Malach, Colin Sandon, Nathan Srebro:
On the Power of Differentiable Learning versus PAC and SQ Learning. CoRR abs/2108.04190 (2021) - [i93]Gal Vardi, Ohad Shamir, Nathan Srebro:
On Margin Maximization in Linear and ReLU Networks. CoRR abs/2110.02732 (2021) - [i92]Brian Bullins, Kumar Kshitij Patel, Ohad Shamir, Nathan Srebro, Blake E. Woodworth:
A Stochastic Newton Algorithm for Distributed Convex Optimization. CoRR abs/2110.02954 (2021) - [i91]Omar Montasser, Steve Hanneke, Nathan Srebro:
Transductive Robust Learning Guarantees. CoRR abs/2110.10602 (2021) - [i90]Lijia Zhou, Frederic Koehler, Danica J. Sutherland, Nathan Srebro:
Optimistic Rates: A Unifying Theory for Interpolation Learning and Regularization in Linear Regression. CoRR abs/2112.04470 (2021) - [i89]Gene Li, Junbo Li, Nathan Srebro, Zhaoran Wang, Zhuoran Yang:
Exponential Family Model-Based Reinforcement Learning via Score Matching. CoRR abs/2112.14195 (2021) - 2020
- [c114]Ryan Rogers, Aaron Roth, Adam D. Smith, Nathan Srebro, Om Thakkar, Blake E. Woodworth:
Guaranteed Validity for Empirical Approaches to Adaptive Data Analysis. AISTATS 2020: 2830-2840 - [c113]Yossi Arjevani, Ohad Shamir, Nathan Srebro:
A Tight Convergence Analysis for Stochastic Gradient Descent with Delayed Updates. ALT 2020: 111-132 - [c112]Pritish Kamath, Omar Montasser, Nathan Srebro:
Approximate is Good Enough: Probabilistic Variants of Dimensional and Margin Complexity. COLT 2020: 2236-2262 - [c111]Blake E. Woodworth, Suriya Gunasekar, Jason D. Lee, Edward Moroshko, Pedro Savarese, Itay Golan, Daniel Soudry, Nathan Srebro:
Kernel and Rich Regimes in Overparametrized Models. COLT 2020: 3635-3673 - [c110]Greg Ongie, Rebecca Willett, Daniel Soudry, Nathan Srebro:
A Function Space View of Bounded Norm Infinite Width ReLU Nets: The Multivariate Case. ICLR 2020 - [c109]Omar Montasser, Surbhi Goel, Ilias Diakonikolas, Nathan Srebro:
Efficiently Learning Adversarially Robust Halfspaces with Noise. ICML 2020: 7010-7021 - [c108]Hussein Mozannar, Mesrob I. Ohannessian, Nathan Srebro:
Fair Learning with Private Demographic Data. ICML 2020: 7066-7075 - [c107]Blake E. Woodworth, Kumar Kshitij Patel, Sebastian U. Stich, Zhen Dai, Brian Bullins, H. Brendan McMahan, Ohad Shamir, Nathan Srebro:
Is Local SGD Better than Minibatch SGD? ICML 2020: 10334-10343 - [c106]Omar Montasser, Steve Hanneke, Nati Srebro:
Reducing Adversarially Robust Learning to Non-Robust PAC Learning. NeurIPS 2020 - [c105]Edward Moroshko, Blake E. Woodworth, Suriya Gunasekar, Jason D. Lee, Nati Srebro, Daniel Soudry:
Implicit Bias in Deep Linear Classification: Initialization Scale vs Training Accuracy. NeurIPS 2020 - [c104]Blake E. Woodworth, Kumar Kshitij Patel, Nati Srebro:
Minibatch vs Local SGD for Heterogeneous Distributed Learning. NeurIPS 2020 - [c103]Lijia Zhou, Danica J. Sutherland, Nati Srebro:
On Uniform Convergence and Low-Norm Interpolation Learning. NeurIPS 2020 - [i88]Blake E. Woodworth, Kumar Kshitij Patel, Sebastian U. Stich, Zhen Dai, Brian Bullins, H. Brendan McMahan, Ohad Shamir, Nathan Srebro:
Is Local SGD Better than Minibatch SGD? CoRR abs/2002.07839 (2020) - [i87]Blake E. Woodworth, Suriya Gunasekar, Jason D. Lee, Edward Moroshko, Pedro Savarese, Itay Golan, Daniel Soudry, Nathan Srebro:
Kernel and Rich Regimes in Overparametrized Models. CoRR abs/2002.09277 (2020) - [i86]Hussein Mozannar, Mesrob I. Ohannessian, Nathan Srebro:
Fair Learning with Private Demographic Data. CoRR abs/2002.11651 (2020) - [i85]Raman Arora, Peter Bartlett, Poorya Mianjy, Nathan Srebro:
Dropout: Explicit Forms and Capacity Control. CoRR abs/2003.03397 (2020) - [i84]Pritish Kamath, Omar Montasser, Nathan Srebro:
Approximate is Good Enough: Probabilistic Variants of Dimensional and Margin Complexity. CoRR abs/2003.04180 (2020) - [i83]Suriya Gunasekar, Blake E. Woodworth, Nathan Srebro:
Mirrorless Mirror Descent: A More Natural Discretization of Riemannian Gradient Flow. CoRR abs/2004.01025 (2020) - [i82]Omar Montasser, Surbhi Goel, Ilias Diakonikolas, Nathan Srebro:
Efficiently Learning Adversarially Robust Halfspaces with Noise. CoRR abs/2005.07652 (2020) - [i81]Blake E. Woodworth, Kumar Kshitij Patel, Nathan Srebro:
Minibatch vs Local SGD for Heterogeneous Distributed Learning. CoRR abs/2006.04735 (2020) - [i80]Lijia Zhou, Danica J. Sutherland, Nathan Srebro:
On Uniform Convergence and Low-Norm Interpolation Learning. CoRR abs/2006.05942 (2020) - [i79]Keshav Vemuri, Nathan Srebro:
Predictive Value Generalization Bounds. CoRR abs/2007.05073 (2020) - [i78]Edward Moroshko, Suriya Gunasekar, Blake E. Woodworth, Jason D. Lee, Nathan Srebro, Daniel Soudry:
Implicit Bias in Deep Linear Classification: Initialization Scale vs Training Accuracy. CoRR abs/2007.06738 (2020) - [i77]Omar Montasser, Steve Hanneke, Nathan Srebro:
Reducing Adversarially Robust Learning to Non-Robust PAC Learning. CoRR abs/2010.12039 (2020)
2010 – 2019
- 2019
- [j12]Chao Gao, Dan Garber, Nathan Srebro, Jialei Wang, Weiran Wang:
Stochastic Canonical Correlation Analysis. J. Mach. Learn. Res. 20: 167:1-167:46 (2019) - [c102]Mor Shpigel Nacson, Nathan Srebro, Daniel Soudry:
Stochastic Gradient Descent on Separable Data: Exact Convergence with a Fixed Learning Rate. AISTATS 2019: 3051-3059 - [c101]Mor Shpigel Nacson, Jason D. Lee, Suriya Gunasekar, Pedro Henrique Pamplona Savarese, Nathan Srebro, Daniel Soudry:
Convergence of Gradient Descent on Separable Data. AISTATS 2019: 3420-3428 - [c100]Weiran Wang, Nathan Srebro:
Stochastic Nonconvex Optimization with Large Minibatches. ALT 2019: 856-881 - [c99]Dylan J. Foster, Ayush Sekhari, Ohad Shamir, Nathan Srebro, Karthik Sridharan, Blake E. Woodworth:
The Complexity of Making the Gradient Small in Stochastic Convex Optimization. COLT 2019: 1319-1345 - [c98]Omar Montasser, Steve Hanneke, Nathan Srebro:
VC Classes are Adversarially Robustly Learnable, but Only Improperly. COLT 2019: 2512-2530 - [c97]Pedro Savarese, Itay Evron, Daniel Soudry, Nathan Srebro:
How do infinite width bounded norm networks look in function space? COLT 2019: 2667-2690 - [c96]Blake E. Woodworth, Nathan Srebro:
Open Problem: The Oracle Complexity of Convex Optimization with Limited Memory. COLT 2019: 3202-3210 - [c95]Hussein Mouzannar, Mesrob I. Ohannessian, Nathan Srebro:
From Fair Decision Making To Social Equality. FAT 2019: 359-368 - [c94]Behnam Neyshabur, Zhiyuan Li, Srinadh Bhojanapalli, Yann LeCun, Nathan Srebro:
The role of over-parametrization in generalization of neural networks. ICLR (Poster) 2019 - [c93]Andrew Cotter, Maya R. Gupta, Heinrich Jiang, Nathan Srebro, Karthik Sridharan, Serena Wang, Blake E. Woodworth, Seungil You:
Training Well-Generalizing Classifiers for Fairness Metrics and Other Data-Dependent Constraints. ICML 2019: 1397-1405 - [c92]Hubert Eichner, Tomer Koren, Brendan McMahan, Nathan Srebro, Kunal Talwar:
Semi-Cyclic Stochastic Gradient Descent. ICML 2019: 1764-1773 - [c91]Mor Shpigel Nacson, Suriya Gunasekar, Jason D. Lee, Nathan Srebro, Daniel Soudry:
Lexicographic and Depth-Sensitive Margins in Homogeneous and Non-Homogeneous Deep Models. ICML 2019: 4683-4692 - [i76]Nandana Sengupta, Nati Srebro, James Evans:
Simple Surveys: Response Retrieval Inspired by Recommendation Systems. CoRR abs/1901.09659 (2019) - [i75]Omar Montasser, Steve Hanneke, Nathan Srebro:
VC Classes are Adversarially Robustly Learnable, but Only Improperly. CoRR abs/1902.04217 (2019) - [i74]Dylan J. Foster, Ayush Sekhari, Ohad Shamir, Nathan Srebro, Karthik Sridharan, Blake E. Woodworth:
The Complexity of Making the Gradient Small in Stochastic Convex Optimization. CoRR abs/1902.04686 (2019) - [i73]Pedro Savarese, Itay Evron, Daniel Soudry, Nathan Srebro:
How do infinite width bounded norm networks look in function space? CoRR abs/1902.05040 (2019) - [i72]Hubert Eichner, Tomer Koren, H. Brendan McMahan, Nathan Srebro, Kunal Talwar:
Semi-Cyclic Stochastic Gradient Descent. CoRR abs/1904.10120 (2019) - [i71]Mor Shpigel Nacson, Suriya Gunasekar, Jason D. Lee, Nathan Srebro, Daniel Soudry:
Lexicographic and Depth-Sensitive Margins in Homogeneous and Non-Homogeneous Deep Models. CoRR abs/1905.07325 (2019) - [i70]Ryan Rogers, Aaron Roth, Adam D. Smith, Nathan Srebro, Om Thakkar, Blake E. Woodworth:
Guaranteed Validity for Empirical Approaches to Adaptive Data Analysis. CoRR abs/1906.09231 (2019) - [i69]Blake E. Woodworth, Nathan Srebro:
Open Problem: The Oracle Complexity of Convex Optimization with Limited Memory. CoRR abs/1907.00762 (2019) - [i68]Greg Ongie, Rebecca Willett, Daniel Soudry, Nathan Srebro:
A Function Space View of Bounded Norm Infinite Width ReLU Nets: The Multivariate Case. CoRR abs/1910.01635 (2019) - [i67]Yossi Arjevani, Yair Carmon, John C. Duchi, Dylan J. Foster, Nathan Srebro, Blake E. Woodworth:
Lower Bounds for Non-Convex Stochastic Optimization. CoRR abs/1912.02365 (2019) - 2018
- [j11]Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Suriya Gunasekar, Nathan Srebro:
The Implicit Bias of Gradient Descent on Separable Data. J. Mach. Learn. Res. 19: 70:1-70:57 (2018) - [c90]Jialei Wang, Weiran Wang, Dan Garber, Nathan Srebro:
Efficient coordinate-wise leading eigenvector computation. ALT 2018: 806-820 - [c89]Behnam Neyshabur, Srinadh Bhojanapalli, Nathan Srebro:
A PAC-Bayesian Approach to Spectrally-Normalized Margin Bounds for Neural Networks. ICLR (Poster) 2018 - [c88]Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Nathan Srebro:
The Implicit Bias of Gradient Descent on Separable Data. ICLR (Poster) 2018 - [c87]Suriya Gunasekar, Jason D. Lee, Daniel Soudry, Nathan Srebro:
Characterizing Implicit Bias in Terms of Optimization Geometry. ICML 2018: 1827-1836 - [c86]Suriya Gunasekar, Blake E. Woodworth, Srinadh Bhojanapalli, Behnam Neyshabur, Nathan Srebro:
Implicit Regularization in Matrix Factorization. ITA 2018: 1-10 - [c85]Blake E. Woodworth, Vitaly Feldman, Saharon Rosset, Nati Srebro:
The Everlasting Database: Statistical Validity at a Fair Price. NeurIPS 2018: 6532-6541 - [c84]Avrim Blum, Suriya Gunasekar, Thodoris Lykouris, Nati Srebro:
On preserving non-discrimination when combining expert advice. NeurIPS 2018: 8386-8397 - [c83]Blake E. Woodworth, Jialei Wang, Adam D. Smith, Brendan McMahan, Nati Srebro:
Graph Oracle Models, Lower Bounds, and Gaps for Parallel Stochastic Optimization. NeurIPS 2018: 8505-8515 - [c82]Suriya Gunasekar, Jason D. Lee, Daniel Soudry, Nati Srebro:
Implicit Bias of Gradient Descent on Linear Convolutional Networks. NeurIPS 2018: 9482-9491 - [i66]Weiran Wang, Jialei Wang, Mladen Kolar, Nathan Srebro:
Distributed Stochastic Multi-Task Learning with Graph Regularization. CoRR abs/1802.03830 (2018) - [i65]Suriya Gunasekar, Jason D. Lee, Daniel Soudry, Nathan Srebro:
Characterizing Implicit Bias in Terms of Optimization Geometry. CoRR abs/1802.08246 (2018) - [i64]Mor Shpigel Nacson, Jason D. Lee, Suriya Gunasekar, Nathan Srebro, Daniel Soudry:
Convergence of Gradient Descent on Separable Data. CoRR abs/1803.01905 (2018) - [i63]Blake E. Woodworth, Vitaly Feldman, Saharon Rosset, Nathan Srebro:
The Everlasting Database: Statistical Validity at a Fair Price. CoRR abs/1803.04307 (2018) - [i62]Blake E. Woodworth, Jialei Wang, Brendan McMahan, Nathan Srebro:
Graph Oracle Models, Lower Bounds, and Gaps for Parallel Stochastic Optimization. CoRR abs/1805.10222 (2018) - [i61]Behnam Neyshabur, Zhiyuan Li, Srinadh Bhojanapalli, Yann LeCun, Nathan Srebro:
Towards Understanding the Role of Over-Parametrization in Generalization of Neural Networks. CoRR abs/1805.12076 (2018) - [i60]Suriya Gunasekar, Jason D. Lee, Daniel Soudry, Nathan Srebro:
Implicit Bias of Gradient Descent on Linear Convolutional Networks. CoRR abs/1806.00468 (2018) - [i59]Mor Shpigel Nacson, Nathan Srebro, Daniel Soudry:
Stochastic Gradient Descent on Separable Data: Exact Convergence with a Fixed Learning Rate. CoRR abs/1806.01796 (2018) - [i58]Yossi Arjevani, Ohad Shamir, Nathan Srebro:
A Tight Convergence Analysis for Stochastic Gradient Descent with Delayed Updates. CoRR abs/1806.10188 (2018) - [i57]Andrew Cotter, Maya R. Gupta, Heinrich Jiang, Nathan Srebro, Karthik Sridharan, Serena Wang, Blake E. Woodworth, Seungil You:
Training Well-Generalizing Classifiers for Fairness Metrics and Other Data-Dependent Constraints. CoRR abs/1807.00028 (2018) - [i56]Avrim Blum, Suriya Gunasekar, Thodoris Lykouris, Nathan Srebro:
On preserving non-discrimination when combining expert advice. CoRR abs/1810.11829 (2018) - [i55]Hussein Mouzannar, Mesrob I. Ohannessian, Nathan Srebro:
From Fair Decision Making to Social Equality. CoRR abs/1812.02952 (2018) - 2017
- [j10]Avleen Singh Bijral, Anand D. Sarwate, Nathan Srebro:
Data-Dependent Convergence for Consensus Stochastic Optimization. IEEE Trans. Autom. Control. 62(9): 4483-4498 (2017) - [c81]Jialei Wang, Jason D. Lee, Mehrdad Mahdavi, Mladen Kolar, Nati Srebro:
Sketching Meets Random Projection in the Dual: A Provable Recovery Algorithm for Big and High-dimensional Data. AISTATS 2017: 1150-1158 - [c80]Jialei Wang, Weiran Wang, Nathan Srebro:
Memory and Communication Efficient Distributed Stochastic Optimization with Minibatch Prox. COLT 2017: 1882-1919 - [c79]Blake E. Woodworth, Suriya Gunasekar, Mesrob I. Ohannessian, Nathan Srebro:
Learning Non-Discriminatory Predictors. COLT 2017: 1920-1953 - [c78]Dan Garber, Ohad Shamir, Nathan Srebro:
Communication-efficient Algorithms for Distributed Stochastic Principal Component Analysis. ICML 2017: 1203-1212 - [c77]Jialei Wang, Mladen Kolar, Nathan Srebro, Tong Zhang:
Efficient Distributed Learning with Sparsity. ICML 2017: 3636-3645 - [c76]Ashia C. Wilson, Rebecca Roelofs, Mitchell Stern, Nati Srebro, Benjamin Recht:
The Marginal Value of Adaptive Gradient Methods in Machine Learning. NIPS 2017: 4148-4158 - [c75]Raman Arora, Teodor Vanislavov Marinov, Poorya Mianjy, Nati Srebro:
Stochastic Approximation for Canonical Correlation Analysis. NIPS 2017: 4775-4784 - [c74]Behnam Neyshabur, Srinadh Bhojanapalli, David McAllester, Nati Srebro:
Exploring Generalization in Deep Learning. NIPS 2017: 5947-5956 - [c73]Suriya Gunasekar, Blake E. Woodworth, Srinadh Bhojanapalli, Behnam Neyshabur, Nati Srebro:
Implicit Regularization in Matrix Factorization. NIPS 2017: 6151-6159 - [i54]Blake E. Woodworth, Suriya Gunasekar, Mesrob I. Ohannessian, Nathan Srebro:
Learning Non-Discriminatory Predictors. CoRR abs/1702.06081 (2017) - [i53]Jialei Wang, Weiran Wang, Nathan Srebro:
Memory and Communication Efficient Distributed Stochastic Optimization with Minibatch Prox. CoRR abs/1702.06269 (2017) - [i52]Chao Gao, Dan Garber, Nathan Srebro, Jialei Wang, Weiran Wang:
Stochastic Canonical Correlation Analysis. CoRR abs/1702.06533 (2017) - [i51]Jialei Wang, Weiran Wang, Dan Garber, Nathan Srebro:
Efficient coordinate-wise leading eigenvector computation. CoRR abs/1702.07834 (2017) - [i50]Dan Garber, Ohad Shamir, Nathan Srebro:
Communication-efficient Algorithms for Distributed Stochastic Principal Component Analysis. CoRR abs/1702.08169 (2017) - [i49]Behnam Neyshabur, Ryota Tomioka, Ruslan Salakhutdinov, Nathan Srebro:
Geometry of Optimization and Implicit Regularization in Deep Learning. CoRR abs/1705.03071 (2017) - [i48]Ashia C. Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, Benjamin Recht:
The Marginal Value of Adaptive Gradient Methods in Machine Learning. CoRR abs/1705.08292 (2017) - [i47]Suriya Gunasekar, Blake E. Woodworth, Srinadh Bhojanapalli, Behnam Neyshabur, Nathan Srebro:
Implicit Regularization in Matrix Factorization. CoRR abs/1705.09280 (2017) - [i46]Behnam Neyshabur, Srinadh Bhojanapalli, David McAllester, Nathan Srebro:
Exploring Generalization in Deep Learning. CoRR abs/1706.08947 (2017) - [i45]Behnam Neyshabur, Srinadh Bhojanapalli, David McAllester, Nathan Srebro:
A PAC-Bayesian Approach to Spectrally-Normalized Margin Bounds for Neural Networks. CoRR abs/1707.09564 (2017) - [i44]Weiran Wang, Nathan Srebro:
Stochastic Nonconvex Optimization with Large Minibatches. CoRR abs/1709.08728 (2017) - [i43]Daniel Soudry, Elad Hoffer, Nathan Srebro:
The Implicit Bias of Gradient Descent on Separable Data. CoRR abs/1710.10345 (2017) - [i42]Chenxin Ma, Martin Jaggi, Frank E. Curtis, Nathan Srebro, Martin Takác:
An Accelerated Communication-Efficient Primal-Dual Optimization Framework for Structured Machine Learning. CoRR abs/1711.05305 (2017) - 2016
- [j9]Deanna Needell, Nathan Srebro, Rachel Ward:
Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm. Math. Program. 155(1-2): 549-573 (2016) - [c72]Heejin Choi, Ofer Meshi, Nathan Srebro:
Fast and Scalable Structural SVM with Slack Rescaling. AISTATS 2016: 667-675 - [c71]Jialei Wang, Mladen Kolar, Nathan Srebro:
Distributed Multi-Task Learning. AISTATS 2016: 751-760 - [c70]Avleen Singh Bijral, Anand D. Sarwate, Nathan Srebro:
Data-dependent bounds on network gradient descent. Allerton 2016: 869-874 - [c69]Weiran Wang, Jialei Wang, Dan Garber, Nati Srebro:
Efficient Globally Convergent Stochastic Optimization for Canonical Correlation Analysis. NIPS 2016: 766-774 - [c68]Moritz Hardt, Eric Price, Nati Srebro:
Equality of Opportunity in Supervised Learning. NIPS 2016: 3315-3323 - [c67]Behnam Neyshabur, Yuhuai Wu, Ruslan Salakhutdinov, Nati Srebro:
Path-Normalized Optimization of Recurrent Neural Networks with ReLU Activations. NIPS 2016: 3477-3485 - [c66]Blake E. Woodworth, Nati Srebro:
Tight Complexity Bounds for Optimizing Composite Objectives. NIPS 2016: 3639-3647 - [c65]