Praneeth Netrapalli
2020 – today
- 2024
- [c61] Arun Sai Suggala, Y. Jennifer Sun, Praneeth Netrapalli, Elad Hazan: Second Order Methods for Bandit Optimization and Control. COLT 2024: 4691-4763
- [c60] Aishwarya P. S., Pranav Ajit Nair, Yashas Samaga, Toby Boyd, Sanjiv Kumar, Prateek Jain, Praneeth Netrapalli: Tandem Transformers for Inference Efficient LLMs. ICML 2024
- [i68] Aishwarya P. S., Pranav Ajit Nair, Yashas Samaga, Toby Boyd, Sanjiv Kumar, Prateek Jain, Praneeth Netrapalli: Tandem Transformers for Inference Efficient LLMs. CoRR abs/2402.08644 (2024)
- [i67] Arun Sai Suggala, Y. Jennifer Sun, Praneeth Netrapalli, Elad Hazan: Second Order Methods for Bandit Optimization and Control. CoRR abs/2402.08929 (2024)
- [i66] Yashas Samaga, Varun Yerram, Chong You, Srinadh Bhojanapalli, Sanjiv Kumar, Prateek Jain, Praneeth Netrapalli: HiRE: High Recall Approximate Top-k Estimation for Efficient LLM Inference. CoRR abs/2402.09360 (2024)
- 2023
- [c59] Aniket Das, Dheeraj M. Nagaraj, Praneeth Netrapalli, Dheeraj Baby: Near Optimal Heteroscedastic Regression with Symbiotic Learning. COLT 2023: 3696-3757
- [c58] Sravanti Addepalli, Anshul Nasery, Venkatesh Babu Radhakrishnan, Praneeth Netrapalli, Prateek Jain: Feature Reconstruction From Outputs Can Mitigate Simplicity Bias in Neural Networks. ICLR 2023
- [c57] Dheeraj Mysore Nagaraj, Suhas S. Kowshik, Naman Agarwal, Praneeth Netrapalli, Prateek Jain: Multi-User Reinforcement Learning with Low Rank Rewards. ICML 2023: 25627-25659
- [c56] Depen Morwani, Jatin Batra, Prateek Jain, Praneeth Netrapalli: Simplicity Bias in 1-Hidden Layer Neural Networks. NeurIPS 2023
- [c55] Qinghua Liu, Praneeth Netrapalli, Csaba Szepesvári, Chi Jin: Optimistic MLE: A Generic Model-Based Algorithm for Partially Observable Sequential Decision Making. STOC 2023: 363-376
- [i65] Depen Morwani, Jatin Batra, Prateek Jain, Praneeth Netrapalli: Simplicity Bias in 1-Hidden Layer Neural Networks. CoRR abs/2302.00457 (2023)
- [i64] Dheeraj Baby, Aniket Das, Dheeraj Nagaraj, Praneeth Netrapalli: Near Optimal Heteroscedastic Regression with Symbiotic Learning. CoRR abs/2306.14288 (2023)
- [i63] Lénaïc Chizat, Praneeth Netrapalli: Steering Deep Feature Learning with Backward Aligned Feature Updates. CoRR abs/2311.18718 (2023)
- 2022
- [c54] Naman Agarwal, Syomantak Chaudhuri, Prateek Jain, Dheeraj Mysore Nagaraj, Praneeth Netrapalli: Online Target Q-learning with Reverse Experience Replay: Efficiently finding the Optimal Policy for Linear MDPs. ICLR 2022
- [c53] Tanner Fiez, Chi Jin, Praneeth Netrapalli, Lillian J. Ratliff: Minimax Optimization with Smooth Algorithmic Adversaries. ICLR 2022
- [c52] Vihari Piratla, Praneeth Netrapalli, Sunita Sarawagi: Focus on the Common Good: Group Distributional Robustness Follows. ICLR 2022
- [c51] Kwangjun Ahn, Prateek Jain, Ziwei Ji, Satyen Kale, Praneeth Netrapalli, Gil I. Shamir: Reproducibility in Optimization: Theoretical Framework and Limits. NeurIPS 2022
- [i62] Kwangjun Ahn, Prateek Jain, Ziwei Ji, Satyen Kale, Praneeth Netrapalli, Gil I. Shamir: Reproducibility in Optimization: Theoretical Framework and Limits. CoRR abs/2202.04598 (2022)
- [i61] Kushal Majmundar, Sachin Goyal, Praneeth Netrapalli, Prateek Jain: MET: Masked Encoding for Tabular Data. CoRR abs/2206.08564 (2022)
- [i60] Ashwin Vaswani, Gaurav Aggarwal, Praneeth Netrapalli, Narayan G. Hegde: All Mistakes Are Not Equal: Comprehensive Hierarchy Aware Multi-label Predictions (CHAMP). CoRR abs/2206.08653 (2022)
- [i59] Anshul Nasery, Sravanti Addepalli, Praneeth Netrapalli, Prateek Jain: DAFT: Distilling Adversarially Fine-tuned Models for Better OOD Generalization. CoRR abs/2208.09139 (2022)
- [i58] Qinghua Liu, Praneeth Netrapalli, Csaba Szepesvári, Chi Jin: Optimistic MLE - A Generic Model-based Algorithm for Partially Observable Sequential Decision Making. CoRR abs/2209.14997 (2022)
- [i57] Sravanti Addepalli, Anshul Nasery, R. Venkatesh Babu, Praneeth Netrapalli, Prateek Jain: Learning an Invertible Output Mapping Can Mitigate Simplicity Bias in Neural Networks. CoRR abs/2210.01360 (2022)
- [i56] Naman Agarwal, Prateek Jain, Suhas S. Kowshik, Dheeraj Nagaraj, Praneeth Netrapalli: Multi-User Reinforcement Learning with Low Rank Rewards. CoRR abs/2210.05355 (2022)
- [i55] Harikrishna Narasimhan, Harish G. Ramaswamy, Shiv Kumar Tavker, Drona Khurana, Praneeth Netrapalli, Shivani Agarwal: Consistent Multiclass Algorithms for Complex Metrics and Constraints. CoRR abs/2210.09695 (2022)
- 2021
- [j7] Chi Jin, Praneeth Netrapalli, Rong Ge, Sham M. Kakade, Michael I. Jordan: On Nonconvex Optimization for Machine Learning: Gradients, Stochasticity, and Saddle Points. J. ACM 68(2): 11:1-11:29 (2021)
- [j6] Prateek Jain, Dheeraj M. Nagaraj, Praneeth Netrapalli: Making the Last Iterate of SGD Information Theoretically Optimal. SIAM J. Optim. 31(2): 1108-1130 (2021)
- [c50] Arun Sai Suggala, Pradeep Ravikumar, Praneeth Netrapalli: Efficient Bandit Convex Optimization: Beyond Linear Losses. COLT 2021: 4008-4067
- [c49] Aadirupa Saha, Nagarajan Natarajan, Praneeth Netrapalli, Prateek Jain: Optimal regret algorithm for Pseudo-1d Bandit Convex Optimization. ICML 2021: 9255-9264
- [c48] Ankit Garg, Robin Kothari, Praneeth Netrapalli, Suhail Sherif: No Quantum Speedup over Gradient Descent for Non-Smooth Convex Optimization. ITCS 2021: 53:1-53:20
- [c47] Harshay Shah, Prateek Jain, Praneeth Netrapalli: Do Input Gradients Highlight Discriminative Features? NeurIPS 2021: 2046-2059
- [c46] Suhas S. Kowshik, Dheeraj Nagaraj, Prateek Jain, Praneeth Netrapalli: Near-optimal Offline and Streaming Algorithms for Learning Non-Linear Dynamical Systems. NeurIPS 2021: 8518-8531
- [c45] Kiran Koshy Thekumparampil, Prateek Jain, Praneeth Netrapalli, Sewoong Oh: Statistically and Computationally Efficient Linear Meta-representation Learning. NeurIPS 2021: 18487-18500
- [c44] Ankit Garg, Robin Kothari, Praneeth Netrapalli, Suhail Sherif: Near-Optimal Lower Bounds For Convex Optimization For All Orders of Smoothness. NeurIPS 2021: 29874-29884
- [c43] Prateek Jain, Suhas S. Kowshik, Dheeraj Nagaraj, Praneeth Netrapalli: Streaming Linear System Identification with Reverse Experience Replay. NeurIPS 2021: 30140-30152
- [i54] Aadirupa Saha, Nagarajan Natarajan, Praneeth Netrapalli, Prateek Jain: Optimal Regret Algorithm for Pseudo-1d Bandit Convex Optimization. CoRR abs/2102.07387 (2021)
- [i53] Harshay Shah, Prateek Jain, Praneeth Netrapalli: Do Input Gradients Highlight Discriminative Features? CoRR abs/2102.12781 (2021)
- [i52] Prateek Jain, Suhas S. Kowshik, Dheeraj Nagaraj, Praneeth Netrapalli: Streaming Linear System Identification with Reverse Experience Replay. CoRR abs/2103.05896 (2021)
- [i51] Kiran Koshy Thekumparampil, Prateek Jain, Praneeth Netrapalli, Sewoong Oh: Sample Efficient Linear Meta-Learning by Alternating Minimization. CoRR abs/2105.08306 (2021)
- [i50] Prateek Jain, Suhas S. Kowshik, Dheeraj Nagaraj, Praneeth Netrapalli: Near-optimal Offline and Streaming Algorithms for Learning Non-Linear Dynamical Systems. CoRR abs/2105.11558 (2021)
- [i49] Tanner Fiez, Chi Jin, Praneeth Netrapalli, Lillian J. Ratliff: Minimax Optimization with Smooth Algorithmic Adversaries. CoRR abs/2106.01488 (2021)
- [i48] Vihari Piratla, Praneeth Netrapalli, Sunita Sarawagi: Focus on the Common Good: Group Distributional Robustness Follows. CoRR abs/2110.02619 (2021)
- [i47] Naman Agarwal, Syomantak Chaudhuri, Prateek Jain, Dheeraj Nagaraj, Praneeth Netrapalli: Online Target Q-learning with Reverse Experience Replay: Efficiently finding the Optimal Policy for Linear MDPs. CoRR abs/2110.08440 (2021)
- [i46] Ankit Garg, Robin Kothari, Praneeth Netrapalli, Suhail Sherif: Near-Optimal Lower Bounds For Convex Optimization For All Orders of Smoothness. CoRR abs/2112.01118 (2021)
- 2020
- [c42] Vivek Gupta, Ankit Saw, Pegah Nokhiz, Praneeth Netrapalli, Piyush Rai, Partha P. Talukdar: P-SIF: Document Embeddings Using Partition Averaging. AAAI 2020: 7863-7870
- [c41] Naman Agarwal, Sham M. Kakade, Rahul Kidambi, Yin Tat Lee, Praneeth Netrapalli, Aaron Sidford: Leverage Score Sampling for Faster Accelerated Regression and ERM. ALT 2020: 22-47
- [c40] Arun Sai Suggala, Praneeth Netrapalli: Online Non-Convex Learning: Following the Perturbed Leader is Optimal. ALT 2020: 845-861
- [c39] Chi Jin, Praneeth Netrapalli, Michael I. Jordan: What is Local Optimality in Nonconvex-Nonconcave Minimax Optimization? ICML 2020: 4880-4889
- [c38] Vihari Piratla, Praneeth Netrapalli, Sunita Sarawagi: Efficient Domain Generalization via Common-Specific Low-Rank Decomposition. ICML 2020: 7728-7738
- [c37] Rahul Kidambi, Aravind Rajeswaran, Praneeth Netrapalli, Thorsten Joachims: MOReL: Model-Based Offline Reinforcement Learning. NeurIPS 2020
- [c36] Dheeraj Nagaraj, Xian Wu, Guy Bresler, Prateek Jain, Praneeth Netrapalli: Least Squares Regression with Markovian Data: Fundamental Limits and Algorithms. NeurIPS 2020
- [c35] Harshay Shah, Kaustav Tamuly, Aditi Raghunathan, Prateek Jain, Praneeth Netrapalli: The Pitfalls of Simplicity Bias in Neural Networks. NeurIPS 2020
- [c34] Arun Sai Suggala, Praneeth Netrapalli: Follow the Perturbed Leader: Optimism and Fast Parallel Algorithms for Smooth Minimax Games. NeurIPS 2020
- [c33] Kiran Koshy Thekumparampil, Prateek Jain, Praneeth Netrapalli, Sewoong Oh: Projection Efficient Subgradient Method and Optimal Nonsmooth Frank-Wolfe Method. NeurIPS 2020
- [i45] Vihari Piratla, Praneeth Netrapalli, Sunita Sarawagi: Efficient Domain Generalization via Common-Specific Low-Rank Decomposition. CoRR abs/2003.12815 (2020)
- [i44] Rahul Kidambi, Aravind Rajeswaran, Praneeth Netrapalli, Thorsten Joachims: MOReL: Model-Based Offline Reinforcement Learning. CoRR abs/2005.05951 (2020)
- [i43] Vivek Gupta, Ankit Saw, Pegah Nokhiz, Praneeth Netrapalli, Piyush Rai, Partha P. Talukdar: P-SIF: Document Embeddings Using Partition Averaging. CoRR abs/2005.09069 (2020)
- [i42] Arun Sai Suggala, Praneeth Netrapalli: Follow the Perturbed Leader: Optimism and Fast Parallel Algorithms for Smooth Minimax Games. CoRR abs/2006.07541 (2020)
- [i41] Harshay Shah, Kaustav Tamuly, Aditi Raghunathan, Prateek Jain, Praneeth Netrapalli: The Pitfalls of Simplicity Bias in Neural Networks. CoRR abs/2006.07710 (2020)
- [i40] Guy Bresler, Prateek Jain, Dheeraj Nagaraj, Praneeth Netrapalli, Xian Wu: Least Squares Regression with Markovian Data: Fundamental Limits and Algorithms. CoRR abs/2006.08916 (2020)
- [i39] Kartik Gupta, Arun Sai Suggala, Adarsh Prasad, Praneeth Netrapalli, Pradeep Ravikumar: Learning Minimax Estimators via Online Learning. CoRR abs/2006.11430 (2020)
- [i38] Ankit Garg, Robin Kothari, Praneeth Netrapalli, Suhail Sherif: No quantum speedup over gradient descent for non-smooth convex optimization. CoRR abs/2010.01801 (2020)
- [i37] Kiran Koshy Thekumparampil, Prateek Jain, Praneeth Netrapalli, Sewoong Oh: Projection Efficient Subgradient Method and Optimal Nonsmooth Frank-Wolfe Method. CoRR abs/2010.01848 (2020)
2010 – 2019
- 2019
- [c32] Prateek Jain, Dheeraj Nagaraj, Praneeth Netrapalli: Making the Last Iterate of SGD Information Theoretically Optimal. COLT 2019: 1752-1755
- [c31] Rong Ge, Prateek Jain, Sham M. Kakade, Rahul Kidambi, Dheeraj M. Nagaraj, Praneeth Netrapalli: Open Problem: Do Good Algorithms Necessarily Query Bad Points? COLT 2019: 3190-3193
- [c30] Dheeraj Nagaraj, Prateek Jain, Praneeth Netrapalli: SGD without Replacement: Sharper Rates for General Smooth Convex Functions. ICML 2019: 4703-4711
- [c29] Kiran Koshy Thekumparampil, Prateek Jain, Praneeth Netrapalli, Sewoong Oh: Efficient Algorithms for Smooth Minimax Optimization. NeurIPS 2019: 12659-12670
- [c28] Rong Ge, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli: The Step Decay Schedule: A Near Optimal, Geometrically Decaying Learning Rate Procedure For Least Squares. NeurIPS 2019: 14951-14962
- [i36] Chi Jin, Praneeth Netrapalli, Michael I. Jordan: Minmax Optimization: Stable Limit Points of Gradient Descent Ascent are Locally Optimal. CoRR abs/1902.00618 (2019)
- [i35] Chi Jin, Praneeth Netrapalli, Rong Ge, Sham M. Kakade, Michael I. Jordan: A Short Note on Concentration Inequalities for Random Vectors with SubGaussian Norm. CoRR abs/1902.03736 (2019)
- [i34] Chi Jin, Praneeth Netrapalli, Rong Ge, Sham M. Kakade, Michael I. Jordan: Stochastic Gradient Descent Escapes Saddle Points Efficiently. CoRR abs/1902.04811 (2019)
- [i33] Prateek Jain, Dheeraj Nagaraj, Praneeth Netrapalli: SGD without Replacement: Sharper Rates for General Smooth Convex Functions. CoRR abs/1903.01463 (2019)
- [i32] Arun Sai Suggala, Praneeth Netrapalli: Online Non-Convex Learning: Following the Perturbed Leader is Optimal. CoRR abs/1903.08110 (2019)
- [i31] Prateek Jain, Dheeraj Nagaraj, Praneeth Netrapalli: Making the Last Iterate of SGD Information Theoretically Optimal. CoRR abs/1904.12443 (2019)
- [i30] Rong Ge, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli: The Step Decay Schedule: A Near Optimal, Geometrically Decaying Learning Rate Procedure. CoRR abs/1904.12838 (2019)
- [i29] Kiran Koshy Thekumparampil, Prateek Jain, Praneeth Netrapalli, Sewoong Oh: Efficient Algorithms for Smooth Minimax Optimization. CoRR abs/1907.01543 (2019)
- [i28] Abhishek Panigrahi, Raghav Somani, Navin Goyal, Praneeth Netrapalli: Non-Gaussianity of Stochastic Gradient Noise. CoRR abs/1910.09626 (2019)
- 2018
- [c27] Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli, Aaron Sidford: Accelerating Stochastic Gradient Descent for Least Squares Regression. COLT 2018: 545-604
- [c26] Chi Jin, Praneeth Netrapalli, Michael I. Jordan: Accelerated Gradient Descent Escapes Saddle Points Faster than Gradient Descent. COLT 2018: 1042-1085
- [c25] Srinadh Bhojanapalli, Nicolas Boumal, Prateek Jain, Praneeth Netrapalli: Smoothed analysis for low-rank solutions to semidefinite programs in quadratic penalty form. COLT 2018: 3243-3270
- [c24] Rahul Kidambi, Praneeth Netrapalli, Prateek Jain, Sham M. Kakade: On the Insufficiency of Existing Momentum Schemes for Stochastic Optimization. ICLR 2018
- [c23] Cameron Musco, Praneeth Netrapalli, Aaron Sidford, Shashanka Ubaru, David P. Woodruff: Spectrum Approximation Beyond Fast Matrix Multiplication: Algorithms and Hardness. ITCS 2018: 8:1-8:21
- [c22] Rahul Kidambi, Praneeth Netrapalli, Prateek Jain, Sham M. Kakade: On the Insufficiency of Existing Momentum Schemes for Stochastic Optimization. ITA 2018: 1-9
- [c21] Raghav Somani, Chirag Gupta, Prateek Jain, Praneeth Netrapalli: Support Recovery for Orthogonal Matching Pursuit: Upper and Lower bounds. NeurIPS 2018: 10837-10847
- [i27] Srinadh Bhojanapalli, Nicolas Boumal, Prateek Jain, Praneeth Netrapalli: Smoothed analysis for low-rank solutions to semidefinite programs in quadratic penalty form. CoRR abs/1803.00186 (2018)
- [i26] Rahul Kidambi, Praneeth Netrapalli, Prateek Jain, Sham M. Kakade: On the Insufficiency of Existing Momentum Schemes for Stochastic Optimization. CoRR abs/1803.05591 (2018)
- 2017
- [j5] Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli, Aaron Sidford: Parallelizing Stochastic Gradient Descent for Least Squares Regression: Mini-batching, Averaging, and Model Misspecification. J. Mach. Learn. Res. 18: 223:1-223:42 (2017)
- [j4] Alekh Agarwal, Animashree Anandkumar, Praneeth Netrapalli: A Clustering Approach to Learning Sparsely Used Overcomplete Dictionaries. IEEE Trans. Inf. Theory 63(1): 575-592 (2017)
- [c20] Prateek Jain, Chi Jin, Sham M. Kakade, Praneeth Netrapalli: Global Convergence of Non-Convex Gradient Descent for Computing Matrix Squareroot. AISTATS 2017: 479-488
- [c19] Yeshwanth Cherapanamjeri, Prateek Jain, Praneeth Netrapalli: Thresholding Based Outlier Robust PCA. COLT 2017: 593-628
- [c18] Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli, Venkata Krishna Pillutla, Aaron Sidford: A Markov Chain Theory Approach to Characterizing the Minimax Optimality of Stochastic Gradient Descent (for Least Squares). FSTTCS 2017: 2:1-2:10
- [c17] Chi Jin, Rong Ge, Praneeth Netrapalli, Sham M. Kakade, Michael I. Jordan: How to Escape Saddle Points Efficiently. ICML 2017: 1724-1732
- [i25] Yeshwanth Cherapanamjeri, Prateek Jain, Praneeth Netrapalli: Thresholding based Efficient Outlier Robust PCA. CoRR abs/1702.05571 (2017)
- [i24] Chi Jin, Rong Ge, Praneeth Netrapalli, Sham M. Kakade, Michael I. Jordan: How to Escape Saddle Points Efficiently. CoRR abs/1703.00887 (2017)
- [i23] Cameron Musco, Praneeth Netrapalli, Aaron Sidford, Shashanka Ubaru, David P. Woodruff: Spectrum Approximation Beyond Fast Matrix Multiplication: Algorithms and Hardness. CoRR abs/1704.04163 (2017)
- [i22] Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli, Aaron Sidford: Accelerating Stochastic Gradient Descent. CoRR abs/1704.08227 (2017)
- [i21] Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli, Venkata Krishna Pillutla, Aaron Sidford: A Markov Chain Theory Approach to Characterizing the Minimax Optimality of Stochastic Gradient Descent (for Least Squares). CoRR abs/1710.09430 (2017)
- [i20] Naman Agarwal, Sham M. Kakade, Rahul Kidambi, Yin Tat Lee, Praneeth Netrapalli, Aaron Sidford: Leverage Score Sampling for Faster Accelerated Regression and ERM. CoRR abs/1711.08426 (2017)
- [i19] Chi Jin, Praneeth Netrapalli, Michael I. Jordan: Accelerated Gradient Descent Escapes Saddle Points Faster than Gradient Descent. CoRR abs/1711.10456 (2017)
- 2016
- [j3] Jason K. Johnson, Diane Oyen, Michael Chertkov, Praneeth Netrapalli: Learning Planar Ising Models. J. Mach. Learn. Res. 17: 215:1-215:26 (2016)
- [j2] Alekh Agarwal, Animashree Anandkumar, Prateek Jain, Praneeth Netrapalli: Learning Sparsely Used Overcomplete Dictionaries via Alternating Minimization. SIAM J. Optim. 26(4): 2775-2799 (2016)
- [c16] Jess Banks, Cristopher Moore, Joe Neeman, Praneeth Netrapalli: Information-theoretic thresholds for community detection in sparse networks. COLT 2016: 383-416
- [c15] Prateek Jain, Chi Jin, Sham M. Kakade, Praneeth Netrapalli, Aaron Sidford: Streaming PCA: Matching Matrix Bernstein and Near-Optimal Finite Sample Guarantees for Oja's Algorithm. COLT 2016: 1147-1164
- [c14] Dan Garber, Elad Hazan, Chi Jin, Sham M. Kakade, Cameron Musco, Praneeth Netrapalli, Aaron Sidford: Faster Eigenvector Computation via Shift-and-Invert Preconditioning. ICML 2016: 2626-2634
- [c13] Rong Ge, Chi Jin, Sham M. Kakade, Praneeth Netrapalli, Aaron Sidford: Efficient Algorithms for Large-scale Generalized Eigenvector Computation and Canonical Correlation Analysis. ICML 2016: 2741-2750
- [c12] Chi Jin, Sham M. Kakade, Praneeth Netrapalli: Provable Efficient Online Matrix Completion via Non-convex Stochastic Gradient Descent. NIPS 2016: 4520-4528
- [i18] Prateek Jain, Chi Jin, Sham M. Kakade, Praneeth Netrapalli, Aaron Sidford: Matching Matrix Bernstein with Little Memory: Near-Optimal Finite Sample Guarantees for Oja's Algorithm. CoRR abs/1602.06929 (2016)
- [i17] Rong Ge, Chi Jin, Sham M. Kakade, Praneeth Netrapalli, Aaron Sidford: Efficient Algorithms for Large-scale Generalized Eigenvector Computation and Canonical Correlation Analysis. CoRR abs/1604.03930 (2016)
- [i16] Chi Jin, Sham M. Kakade, Praneeth Netrapalli: Provable Efficient Online Matrix Completion via Non-convex Stochastic Gradient Descent. CoRR abs/1605.08370 (2016)
- [i15] Dan Garber, Elad Hazan, Chi Jin, Sham M. Kakade, Cameron Musco, Praneeth Netrapalli, Aaron Sidford: Faster Eigenvector Computation via Shift-and-Invert Preconditioning. CoRR abs/1605.08754 (2016)
- [i14] Jess Banks, Cristopher Moore, Joe Neeman, Praneeth Netrapalli: Information-theoretic thresholds for community detection in sparse networks. CoRR abs/1607.01760 (2016)
- [i13] Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli, Aaron Sidford: Parallelizing Stochastic Approximation Through Mini-Batching and Tail-Averaging. CoRR abs/1610.03774 (2016)
- 2015
- [j1] Praneeth Netrapalli, Prateek Jain, Sujay Sanghavi: Phase Retrieval Using Alternating Minimization. IEEE Trans. Signal Process. 63(18): 4814-4826 (2015)
- [c11] Prateek Jain, Praneeth Netrapalli: Fast Exact Matrix Completion with Finite Samples. COLT 2015: 1007-1034
- [c10] Kamalika Chaudhuri, Sham M. Kakade, Praneeth Netrapalli, Sujay Sanghavi: Convergence Rates of Active Learning for Maximum Likelihood Estimation. NIPS 2015: 1090-1098
- [i12] Kamalika Chaudhuri, Sham M. Kakade, Praneeth Netrapalli, Sujay Sanghavi: Convergence Rates of Active Learning for Maximum Likelihood Estimation. CoRR abs/1506.02348 (2015)
- [i11] Prateek Jain, Chi Jin, Sham M. Kakade, Praneeth Netrapalli: Computing Matrix Squareroot via Non Convex Local Search. CoRR abs/1507.05854 (2015)
- [i10] Chi Jin, Sham M. Kakade, Cameron Musco, Praneeth Netrapalli, Aaron Sidford: Robust Shift-and-Invert Preconditioning: Faster and More Sample Efficient Algorithms for Eigenvector Computation. CoRR abs/1510.08896 (2015)
- 2014
- [c9] Alekh Agarwal, Animashree Anandkumar, Prateek Jain, Praneeth Netrapalli, Rashish Tandon: Learning Sparsely Used Overcomplete Dictionaries. COLT 2014: 123-137
- [c8] Abhik Kumar Das, Praneeth Netrapalli, Sujay Sanghavi, Sriram Vishwanath: Learning structure of power-law Markov networks. ISIT 2014: 2272-2276
- [c7] Praneeth Netrapalli, U. N. Niranjan, Sujay Sanghavi, Animashree Anandkumar, Prateek Jain: Non-convex Robust PCA. NIPS 2014: 1107-1115
- [i9] Joe Neeman, Praneeth Netrapalli: Non-Reconstructability in the Stochastic Block Model. CoRR abs/1404.6304 (2014)
- [i8] Praneeth Netrapalli, U. N. Niranjan, Sujay Sanghavi, Animashree Anandkumar, Prateek Jain: Non-convex Robust PCA. CoRR abs/1410.7660 (2014)
- [i7] Prateek Jain, Praneeth Netrapalli: Fast Exact Matrix Completion with Finite Samples. CoRR abs/1411.1087 (2014)
- 2013
- [c6] Sivakanth Gopi, Praneeth Netrapalli, Prateek Jain, Aditya V. Nori: One-Bit Compressed Sensing: Provable Support and Vector Recovery. ICML (3) 2013: 154-162
- [c5] Praneeth Netrapalli, Prateek Jain, Sujay Sanghavi: Phase Retrieval using Alternating Minimization. NIPS 2013: 2796-2804
- [c4]