Rong Ge 0001
Person information
- affiliation: Duke University, Durham, NC, USA
- affiliation (former): Princeton University, NJ, USA
Other persons with the same name
- Rong Ge — disambiguation page
- Rong Ge 0002 — Clemson University, School of Computing, SC, USA
- Rong Ge 0003 — Marquette University, Milwaukee, WI, USA
- Rong Ge 0005 — Purdue University, West Lafayette, IN, USA
- Rong Ge 0006 — Chinese Academy of Sciences, Institute of Geographic Sciences and Natural Resources Research, Key Laboratory of Ecosystem Network Observation and Modeling, Beijing, China
2020 – today
- 2024
- [c76] Muthu Chidambaram, Rong Ge: On the Limitations of Temperature Scaling for Distributions with Overlaps. ICLR 2024
- [i85] Muthu Chidambaram, Rong Ge: For Better or For Worse? Learning Minimum Variance Features With Label Augmentation. CoRR abs/2402.06855 (2024)
- [i84] Shuyao Li, Yu Cheng, Ilias Diakonikolas, Jelena Diakonikolas, Rong Ge, Stephen J. Wright: Robust Second-Order Nonconvex Optimization and Its Application to Low Rank Matrix Sensing. CoRR abs/2403.10547 (2024)
- [i83] Mo Zhou, Rong Ge: How Does Gradient Descent Learn Features - A Local Analysis for Regularized Two-Layer Neural Networks. CoRR abs/2406.01766 (2024)
- [i82] Muthu Chidambaram, Rong Ge: Reassessing How to Compare and Improve the Calibration of Machine Learning Models. CoRR abs/2406.04068 (2024)
- 2023
- [c75] Haoyu Zhao, Abhishek Panigrahi, Rong Ge, Sanjeev Arora: Do Transformers Parse while Predicting the Masked Word? EMNLP 2023: 16513-16542
- [c74] Chenwei Wu, Li Erran Li, Stefano Ermon, Patrick Haffner, Rong Ge, Zaiwei Zhang: The Role of Linguistic Priors in Measuring Compositional Generalization of Vision-Language Models. ICBINB 2023: 118-126
- [c73] Xingyu Zhu, Zixuan Wang, Xiang Wang, Mo Zhou, Rong Ge: Understanding Edge-of-Stability Training Dynamics with a Minimalist Example. ICLR 2023
- [c72] Xiang Wang, Annie N. Wang, Mo Zhou, Rong Ge: Plateau in Monotonic Linear Interpolation - A "Biased" View of Loss Landscape for Deep Networks. ICLR 2023
- [c71] Zeping Luo, Shiyou Wu, Cindy Weng, Mo Zhou, Rong Ge: Understanding The Robustness of Self-supervised Learning Through Topic Modeling. ICLR 2023
- [c70] Yunwei Ren, Mo Zhou, Rong Ge: Depth Separation with Multilayer Mean-Field Networks. ICLR 2023
- [c69] Muthu Chidambaram, Xiang Wang, Chenwei Wu, Rong Ge: Provably Learning Diverse Features in Multi-View Data with Midpoint Mixup. ICML 2023: 5563-5599
- [c68] Muthu Chidambaram, Chenwei Wu, Yu Cheng, Rong Ge: Hiding Data Helps: On the Benefits of Masking for Sparse Coding. ICML 2023: 5600-5615
- [c67] Mo Zhou, Rong Ge: Implicit Regularization Leads to Benign Overfitting for Sparse Linear Regression. ICML 2023: 42543-42573
- [c66] Chenwei Wu, Holden Lee, Rong Ge: Connecting Pre-trained Language Model and Downstream Task via Properties of Representation. NeurIPS 2023
- [c65] Alex Damian, Eshaan Nichani, Rong Ge, Jason D. Lee: Smoothing the Landscape Boosts the Signal for SGD: Optimal Sample Complexity for Learning Single Index Models. NeurIPS 2023
- [c64] Shuyao Li, Yu Cheng, Ilias Diakonikolas, Jelena Diakonikolas, Rong Ge, Stephen J. Wright: Robust Second-Order Nonconvex Optimization and Its Application to Low Rank Matrix Sensing. NeurIPS 2023
- [i81] Mo Zhou, Rong Ge: Implicit Regularization Leads to Benign Overfitting for Sparse Linear Regression. CoRR abs/2302.00257 (2023)
- [i80] Muthu Chidambaram, Chenwei Wu, Yu Cheng, Rong Ge: Hiding Data Helps: On the Benefits of Masking for Sparse Coding. CoRR abs/2302.12715 (2023)
- [i79] Haoyu Zhao, Abhishek Panigrahi, Rong Ge, Sanjeev Arora: Do Transformers Parse while Predicting the Masked Word? CoRR abs/2303.08117 (2023)
- [i78] Yunwei Ren, Mo Zhou, Rong Ge: Depth Separation with Multilayer Mean-Field Networks. CoRR abs/2304.01063 (2023)
- [i77] Alex Damian, Eshaan Nichani, Rong Ge, Jason D. Lee: Smoothing the Landscape Boosts the Signal for SGD: Optimal Sample Complexity for Learning Single Index Models. CoRR abs/2305.10633 (2023)
- [i76] Muthu Chidambaram, Rong Ge: A Uniform Confidence Phenomenon in Deep Learning and its Implications for Calibration. CoRR abs/2306.00740 (2023)
- [i75] Chenwei Wu, Li Erran Li, Stefano Ermon, Patrick Haffner, Rong Ge, Zaiwei Zhang: The Role of Linguistic Priors in Measuring Compositional Generalization of Vision-Language Models. CoRR abs/2310.02777 (2023)
- 2022
- [j14] Abraham Frandsen, Rong Ge: Optimization landscape of Tucker decomposition. Math. Program. 193(2): 687-712 (2022)
- [j13] Rong Ge, Tengyu Ma: On the optimization landscape of tensor decompositions. Math. Program. 193(2): 713-759 (2022)
- [c63] Muthu Chidambaram, Xiang Wang, Yuzheng Hu, Chenwei Wu, Rong Ge: Towards Understanding the Data Dependency of Mixup-style Training. ICLR 2022
- [c62] Keerti Anand, Rong Ge, Amit Kumar, Debmalya Panigrahi: Online Algorithms with Multiple Predictions. ICML 2022: 582-598
- [c61] Abraham Frandsen, Rong Ge, Holden Lee: Extracting Latent State Representations with Linear Dynamics from Rich Observations. ICML 2022: 6705-6725
- [c60] Yu Cheng, Ilias Diakonikolas, Rong Ge, Shivam Gupta, Daniel Kane, Mahdi Soltanolkotabi: Outlier-Robust Sparse Estimation via Non-Convex Optimization. NeurIPS 2022
- [i74] Zeping Luo, Cindy Weng, Shiyou Wu, Mo Zhou, Rong Ge: One Objective for All Models - Self-supervised Learning for Topic Models. CoRR abs/2203.03539 (2022)
- [i73] Keerti Anand, Rong Ge, Amit Kumar, Debmalya Panigrahi: Online Algorithms with Multiple Predictions. CoRR abs/2205.03921 (2022)
- [i72] Keerti Anand, Rong Ge, Debmalya Panigrahi: Customizing ML Predictions for Online Algorithms. CoRR abs/2205.08715 (2022)
- [i71] Keerti Anand, Rong Ge, Amit Kumar, Debmalya Panigrahi: A Regression Approach to Learning-Augmented Online Algorithms. CoRR abs/2205.08717 (2022)
- [i70] Xiang Wang, Annie N. Wang, Mo Zhou, Rong Ge: Plateau in Monotonic Linear Interpolation - A "Biased" View of Loss Landscape for Deep Networks. CoRR abs/2210.01019 (2022)
- [i69] Xingyu Zhu, Zixuan Wang, Xiang Wang, Mo Zhou, Rong Ge: Understanding Edge-of-Stability Training Dynamics with a Minimalist Example. CoRR abs/2210.03294 (2022)
- [i68] Muthu Chidambaram, Xiang Wang, Chenwei Wu, Rong Ge: Provably Learning Diverse Features in Multi-View Data with Midpoint Mixup. CoRR abs/2210.13512 (2022)
- 2021
- [j12] Chi Jin, Praneeth Netrapalli, Rong Ge, Sham M. Kakade, Michael I. Jordan: On Nonconvex Optimization for Machine Learning: Gradients, Stochasticity, and Saddle Points. J. ACM 68(2): 11:1-11:29 (2021)
- [j11] Yossi Azar, Arun Ganesh, Rong Ge, Debmalya Panigrahi: Online Service with Delay. ACM Trans. Algorithms 17(3): 23:1-23:31 (2021)
- [c59] Rong Ge, Holden Lee, Jianfeng Lu, Andrej Risteski: Efficient sampling from the Bingham distribution. ALT 2021: 673-685
- [c58] Mo Zhou, Rong Ge, Chi Jin: A Local Convergence Theory for Mildly Over-Parameterized Two-Layer Neural Network. COLT 2021: 4577-4632
- [c57] Xiang Wang, Shuai Yuan, Chenwei Wu, Rong Ge: Guarantees for Tuning the Step Size using a Learning-to-Learn Approach. ICML 2021: 10981-10990
- [c56] Rong Ge, Yunwei Ren, Xiang Wang, Mo Zhou: Understanding Deflation Process in Over-parametrized Tensor Decomposition. NeurIPS 2021: 1299-1311
- [c55] Keerti Anand, Rong Ge, Amit Kumar, Debmalya Panigrahi: A Regression Approach to Learning-Augmented Online Algorithms. NeurIPS 2021: 30504-30517
- [i67] Mo Zhou, Rong Ge, Chi Jin: A Local Convergence Theory for Mildly Over-Parameterized Two-Layer Neural Network. CoRR abs/2102.02410 (2021)
- [i66] Rong Ge, Yunwei Ren, Xiang Wang, Mo Zhou: Understanding Deflation Process in Over-parametrized Tensor Decomposition. CoRR abs/2106.06573 (2021)
- [i65] Yu Cheng, Ilias Diakonikolas, Daniel M. Kane, Rong Ge, Shivam Gupta, Mahdi Soltanolkotabi: Outlier-Robust Sparse Estimation via Non-Convex Optimization. CoRR abs/2109.11515 (2021)
- [i64] Muthu Chidambaram, Xiang Wang, Yuzheng Hu, Chenwei Wu, Rong Ge: Towards Understanding the Data Dependency of Mixup-style Training. CoRR abs/2110.07647 (2021)
- 2020
- [c54] Keerti Anand, Rong Ge, Debmalya Panigrahi: Customizing ML Predictions for Online Algorithms. ICML 2020: 303-313
- [c53] Yu Cheng, Ilias Diakonikolas, Rong Ge, Mahdi Soltanolkotabi: High-dimensional Robust Mean Estimation via Gradient Descent. ICML 2020: 1768-1778
- [c52] Xiang Wang, Chenwei Wu, Jason D. Lee, Tengyu Ma, Rong Ge: Beyond Lazy Training for Over-parameterized Tensor Decomposition. NeurIPS 2020
- [c51] Rong Ge, Holden Lee, Jianfeng Lu: Estimating normalizing constants for log-concave distributions: algorithms and lower bounds. STOC 2020: 579-586
- [p1] Rong Ge, Ankur Moitra: Topic Models and Nonnegative Matrix Factorization. Beyond the Worst-Case Analysis of Algorithms 2020: 445-464
- [i63] Majid Janzamin, Rong Ge, Jean Kossaifi, Anima Anandkumar: Spectral Learning on Matrices and Tensors. CoRR abs/2004.07984 (2020)
- [i62] Yu Cheng, Ilias Diakonikolas, Rong Ge, Mahdi Soltanolkotabi: High-Dimensional Robust Mean Estimation via Gradient Descent. CoRR abs/2005.01378 (2020)
- [i61] Abraham Frandsen, Rong Ge: Extracting Latent State Representations with Linear Dynamics from Rich Observations. CoRR abs/2006.16128 (2020)
- [i60] Abraham Frandsen, Rong Ge: Optimization Landscape of Tucker Decomposition. CoRR abs/2006.16297 (2020)
- [i59] Xiang Wang, Shuai Yuan, Chenwei Wu, Rong Ge: Guarantees for Tuning the Step Size using a Learning-to-Learn Approach. CoRR abs/2006.16495 (2020)
- [i58] Rong Ge, Holden Lee, Jianfeng Lu, Andrej Risteski: Efficient sampling from the Bingham distribution. CoRR abs/2010.00137 (2020)
- [i57] Yikai Wu, Xingyu Zhu, Chenwei Wu, Annie N. Wang, Rong Ge: Dissecting Hessian: Understanding Common Structure of Hessian in Neural Networks. CoRR abs/2010.04261 (2020)
- [i56] Xiang Wang, Chenwei Wu, Jason D. Lee, Tengyu Ma, Rong Ge: Beyond Lazy Training for Over-parameterized Tensor Decomposition. CoRR abs/2010.11356 (2020)
2010 – 2019
- 2019
- [j10] Majid Janzamin, Rong Ge, Jean Kossaifi, Anima Anandkumar: Spectral Learning on Matrices and Tensors. Found. Trends Mach. Learn. 12(5-6): 393-536 (2019)
- [c50] Yu Cheng, Ilias Diakonikolas, Rong Ge, David P. Woodruff: Faster Algorithms for High-Dimensional Robust Covariance Estimation. COLT 2019: 727-757
- [c49] Rong Ge, Zhize Li, Weiyao Wang, Xiang Wang: Stabilized SVRG: Simple Variance Reduction for Nonconvex Optimization. COLT 2019: 1394-1448
- [c48] Rong Ge, Prateek Jain, Sham M. Kakade, Rahul Kidambi, Dheeraj M. Nagaraj, Praneeth Netrapalli: Open Problem: Do Good Algorithms Necessarily Query Bad Points? COLT 2019: 3190-3193
- [c47] Abraham Frandsen, Rong Ge: Understanding Composition of Word Embeddings via Tensor Decomposition. ICLR (Poster) 2019
- [c46] Rong Ge, Rohith Kuditipudi, Zhize Li, Xiang Wang: Learning Two-layer Neural Networks with Symmetric Inputs. ICLR (Poster) 2019
- [c45] Rohith Kuditipudi, Xiang Wang, Holden Lee, Yi Zhang, Zhiyuan Li, Wei Hu, Rong Ge, Sanjeev Arora: Explaining Landscape Connectivity of Low-cost Solutions for Multilayer Nets. NeurIPS 2019: 14574-14583
- [c44] Rong Ge, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli: The Step Decay Schedule: A Near Optimal, Geometrically Decaying Learning Rate Procedure For Least Squares. NeurIPS 2019: 14951-14962
- [c43] Yu Cheng, Ilias Diakonikolas, Rong Ge: High-Dimensional Robust Mean Estimation in Nearly-Linear Time. SODA 2019: 2755-2771
- [i55] Abraham Frandsen, Rong Ge: Understanding Composition of Word Embeddings via Tensor Decomposition. CoRR abs/1902.00613 (2019)
- [i54] Chi Jin, Praneeth Netrapalli, Rong Ge, Sham M. Kakade, Michael I. Jordan: A Short Note on Concentration Inequalities for Random Vectors with SubGaussian Norm. CoRR abs/1902.03736 (2019)
- [i53] Chi Jin, Praneeth Netrapalli, Rong Ge, Sham M. Kakade, Michael I. Jordan: Stochastic Gradient Descent Escapes Saddle Points Efficiently. CoRR abs/1902.04811 (2019)
- [i52] Rong Ge, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli: The Step Decay Schedule: A Near Optimal, Geometrically Decaying Learning Rate Procedure. CoRR abs/1904.12838 (2019)
- [i51] Rong Ge, Zhize Li, Weiyao Wang, Xiang Wang: Stabilized SVRG: Simple Variance Reduction for Nonconvex Optimization. CoRR abs/1905.00529 (2019)
- [i50] Yu Cheng, Ilias Diakonikolas, Rong Ge, David P. Woodruff: Faster Algorithms for High-Dimensional Robust Covariance Estimation. CoRR abs/1906.04661 (2019)
- [i49] Rohith Kuditipudi, Xiang Wang, Holden Lee, Yi Zhang, Zhiyuan Li, Wei Hu, Sanjeev Arora, Rong Ge: Explaining Landscape Connectivity of Low-cost Solutions for Multilayer Nets. CoRR abs/1906.06247 (2019)
- [i48] Rong Ge, Runzhe Wang, Haoyu Zhao: Mildly Overparametrized Neural Nets can Memorize Training Data Efficiently. CoRR abs/1909.11837 (2019)
- [i47] Rong Ge, Holden Lee, Jianfeng Lu: Estimating Normalizing Constants for Log-Concave Distributions: Algorithms and Lower Bounds. CoRR abs/1911.03043 (2019)
- 2018
- [j9] Sanjeev Arora, Rong Ge, Yoni Halpern, David M. Mimno, Ankur Moitra, David A. Sontag, Yichen Wu, Michael Zhu: Learning topic models - provably and efficiently. Commun. ACM 61(4): 85-93 (2018)
- [c42] Yu Cheng, Rong Ge: Non-Convex Matrix Completion Against a Semi-Random Adversary. COLT 2018: 1362-1394
- [c41] Rong Ge, Jason D. Lee, Tengyu Ma: Learning One-hidden-layer Neural Networks with Landscape Design. ICLR (Poster) 2018
- [c40] Sanjeev Arora, Rong Ge, Behnam Neyshabur, Yi Zhang: Stronger Generalization Bounds for Deep Nets via a Compression Approach. ICML 2018: 254-263
- [c39] Maryam Fazel, Rong Ge, Sham M. Kakade, Mehran Mesbahi: Global Convergence of Policy Gradient Methods for the Linear Quadratic Regulator. ICML 2018: 1466-1475
- [c38] Chi Jin, Lydia T. Liu, Rong Ge, Michael I. Jordan: On the Local Minima of the Empirical Risk. NeurIPS 2018: 4901-4910
- [c37] Holden Lee, Andrej Risteski, Rong Ge: Beyond Log-concavity: Provable Guarantees for Sampling Multi-modal Distributions using Simulated Tempering Langevin Monte Carlo. NeurIPS 2018: 7858-7867
- [i46] Maryam Fazel, Rong Ge, Sham M. Kakade, Mehran Mesbahi: Global Convergence of Policy Gradient Methods for Linearized Control Problems. CoRR abs/1801.05039 (2018)
- [i45] Sanjeev Arora, Rong Ge, Behnam Neyshabur, Yi Zhang: Stronger generalization bounds for deep nets via a compression approach. CoRR abs/1802.05296 (2018)
- [i44] Chi Jin, Lydia T. Liu, Rong Ge, Michael I. Jordan: Minimizing Nonconvex Population Risk from Rough Empirical Risk. CoRR abs/1803.09357 (2018)
- [i43] Yu Cheng, Rong Ge: Non-Convex Matrix Completion Against a Semi-Random Adversary. CoRR abs/1803.10846 (2018)
- [i42] Rong Ge, Rohith Kuditipudi, Zhize Li, Xiang Wang: Learning Two-layer Neural Networks with Symmetric Inputs. CoRR abs/1810.06793 (2018)
- [i41] Yu Cheng, Ilias Diakonikolas, Rong Ge: High-Dimensional Robust Mean Estimation in Nearly-Linear Time. CoRR abs/1811.09380 (2018)
- [i40] Rong Ge, Holden Lee, Andrej Risteski: Simulated Tempering Langevin Monte Carlo II: An Improved Proof using Soft Markov Chain Decomposition. CoRR abs/1812.00793 (2018)
- 2017
- [j8] Animashree Anandkumar, Rong Ge, Majid Janzamin: Analyzing Tensor Power Method Dynamics in Overcomplete Regime. J. Mach. Learn. Res. 18: 22:1-22:40 (2017)
- [c36] Anima Anandkumar, Yuan Deng, Rong Ge, Hossein Mobahi: Homotopy Analysis for Tensor PCA. COLT 2017: 79-104
- [c35] Holden Lee, Rong Ge, Tengyu Ma, Andrej Risteski, Sanjeev Arora: On the Ability of Neural Nets to Express Distributions. COLT 2017: 1271-1296
- [c34] Sanjeev Arora, Rong Ge, Yingyu Liang, Tengyu Ma, Yi Zhang: Generalization and Equilibrium in Generative Adversarial Nets (GANs). ICML 2017: 224-232
- [c33] Rong Ge, Chi Jin, Yi Zheng: No Spurious Local Minima in Nonconvex Low Rank Problems: A Unified Geometric Analysis. ICML 2017: 1233-1242
- [c32] Chi Jin, Rong Ge, Praneeth Netrapalli, Sham M. Kakade, Michael I. Jordan: How to Escape Saddle Points Efficiently. ICML 2017: 1724-1732
- [c31] Rong Ge, Tengyu Ma: On the Optimization Landscape of Tensor Decompositions. NIPS 2017: 3653-3663
- [c30] Yossi Azar, Arun Ganesh, Rong Ge, Debmalya Panigrahi: Online service with delay. STOC 2017: 551-563
- [c29] Sanjeev Arora, Rong Ge, Tengyu Ma, Andrej Risteski: Provable learning of noisy-OR networks. STOC 2017: 1057-1066
- [i39] Holden Lee, Rong Ge, Andrej Risteski, Tengyu Ma, Sanjeev Arora: On the ability of neural nets to express distributions. CoRR abs/1702.07028 (2017)
- [i38] Sanjeev Arora, Rong Ge, Yingyu Liang, Tengyu Ma, Yi Zhang: Generalization and Equilibrium in Generative Adversarial Nets (GANs). CoRR abs/1703.00573 (2017)
- [i37] Chi Jin, Rong Ge, Praneeth Netrapalli, Sham M. Kakade, Michael I. Jordan: How to Escape Saddle Points Efficiently. CoRR abs/1703.00887 (2017)
- [i36] Rong Ge, Chi Jin, Yi Zheng: No Spurious Local Minima in Nonconvex Low Rank Problems: A Unified Geometric Analysis. CoRR abs/1704.00708 (2017)
- [i35] Rong Ge, Tengyu Ma: On the Optimization Landscape of Tensor Decompositions. CoRR abs/1706.05598 (2017)
- [i34] Yossi Azar, Arun Ganesh, Rong Ge, Debmalya Panigrahi: Online Service with Delay. CoRR abs/1708.05611 (2017)
- [i33] Rong Ge, Holden Lee, Andrej Risteski: Beyond Log-concavity: Provable Guarantees for Sampling Multi-modal Distributions using Simulated Tempering Langevin Monte Carlo. CoRR abs/1710.02736 (2017)
- [i32] Rong Ge, Jason D. Lee, Tengyu Ma: Learning One-hidden-layer Neural Networks with Landscape Design. CoRR abs/1711.00501 (2017)
- 2016
- [j7] Sanjeev Arora, Rong Ge, Ravi Kannan, Ankur Moitra: Computing a Nonnegative Matrix Factorization - Provably. SIAM J. Comput. 45(4): 1582-1611 (2016)
- [j6] Qingqing Huang, Rong Ge, Sham M. Kakade, Munther A. Dahleh: Minimal Realization Problems for Hidden Markov Models. IEEE Trans. Signal Process. 64(7): 1896-1904 (2016)
- [c28] Animashree Anandkumar, Rong Ge: Efficient approaches for escaping higher order saddle points in non-convex optimization. COLT 2016: 81-102
- [c27] Rong Ge, James Zou: Rich Component Analysis. ICML 2016: 1502-1510
- [c26] Rong Ge, Chi Jin, Sham M. Kakade, Praneeth Netrapalli, Aaron Sidford: Efficient Algorithms for Large-scale Generalized Eigenvector Computation and Canonical Correlation Analysis. ICML 2016: 2741-2750
- [c25] Sanjeev Arora, Rong Ge, Frederic Koehler, Tengyu Ma, Ankur Moitra: Provable Algorithms for Inference in Topic Models. ICML 2016: 2859-2867
- [c24] Rong Ge, Jason D. Lee, Tengyu Ma: Matrix Completion has No Spurious Local Minimum. NIPS 2016: 2973-2981
- [i31] Anima Anandkumar, Rong Ge: Efficient approaches for escaping higher order saddle points in non-convex optimization. CoRR abs/1602.05908 (2016)
- [i30] Rong Ge, Chi Jin, Sham M. Kakade, Praneeth Netrapalli, Aaron Sidford: Efficient Algorithms for Large-scale Generalized Eigenvector Computation and Canonical Correlation Analysis. CoRR abs/1604.03930 (2016)
- [i29] Rong Ge, Jason D. Lee, Tengyu Ma: Matrix Completion has No Spurious Local Minimum. CoRR abs/1605.07272 (2016)
- [i28] Sanjeev Arora, Rong Ge, Frederic Koehler, Tengyu Ma, Ankur Moitra: Provable Algorithms for Inference in Topic Models. CoRR abs/1605.08491 (2016)
- [i27] Anima Anandkumar, Yuan Deng, Rong Ge, Hossein Mobahi: Homotopy Method for Tensor Principal Component Analysis. CoRR abs/1610.09322 (2016)
- [i26] Sanjeev Arora, Rong Ge, Tengyu Ma, Andrej Risteski: Provable learning of Noisy-or Networks. CoRR abs/1612.08795 (2016)
- 2015
- [j5] Sanjeev Arora, Rong Ge, Ankur Moitra, Sushant Sachdeva: Provable ICA with Unknown Gaussian Noise, and Implications for Gaussian Mixtures and Autoencoders. Algorithmica 72(1): 215-236 (2015)
- [c23] Anima Anandkumar, Rong Ge, Daniel J. Hsu, Sham M. Kakade, Matus Telgarsky: Tensor Decompositions for Learning Latent Variable Models (A Survey for ALT). ALT 2015: 19-38
- [c22] Rong Ge, Tengyu Ma: Decomposing Overcomplete 3rd Order Tensors using Sum-of-Squares Algorithms. APPROX-RANDOM 2015: 829-849
- [c21] Animashree Anandkumar, Rong Ge, Majid Janzamin: Learning Overcomplete Latent Variable Models through Tensor Methods. COLT 2015: 36-112
- [c20]