


Chi Jin 0001
Person information
- affiliation: Princeton University, Department of Electrical Engineering, NJ, USA
- affiliation (PhD 2019): University of California, Berkeley, USA
Other persons with the same name
- Chi Jin — disambiguation page
- Chi Jin 0002
— V-Flow Tech Pte. Ltd., Singapore (and 1 more)
- Chi Jin 0003
— CentraleSupélec, CNRS, France (and 1 more)
- Chi Jin 0004
— Jilin University, College of Communication Engineering, Changchun, China
2020 – today
- 2024
- [j4] Chi Jin, Qinghua Liu, Yuanhao Wang, Tiancheng Yu: V-Learning - A Simple, Efficient, Decentralized Algorithm for Multiagent Reinforcement Learning. Math. Oper. Res. 49(4): 2295-2322 (2024)
- [c61] Mahsa Bastankhah, Viraj Nadkarni, Chi Jin, Sanjeev Kulkarni, Pramod Viswanath: Thinking Fast and Slow: Data-Driven Adaptive DeFi Borrow-Lending Protocol. AFT 2024: 27:1-27:23
- [c60] Zihan Ding, Chi Jin: Consistency Models as a Rich and Efficient Policy Class for Reinforcement Learning. ICLR 2024
- [c59] Jiawei Ge, Shange Tang, Jianqing Fan, Chi Jin: On the Provable Advantage of Unsupervised Pretraining. ICLR 2024
- [c58] Jiawei Ge, Shange Tang, Jianqing Fan, Cong Ma, Chi Jin: Maximum Likelihood Estimation is All You Need for Well-Specified Covariate Shift. ICLR 2024
- [c57] Ahmed Khaled, Chi Jin: Tuning-Free Stochastic Optimization. ICML 2024
- [c56] Wenzhe Li, Zihan Ding, Seth Karten, Chi Jin: FightLadder: A Benchmark for Competitive Multi-Agent Reinforcement Learning. ICML 2024
- [i73] Ahmed Khaled, Chi Jin: Tuning-Free Stochastic Optimization. CoRR abs/2402.07793 (2024)
- [i72] Wenzhe Li, Zihan Ding, Seth Karten, Chi Jin: FightLadder: A Benchmark for Competitive Multi-Agent Reinforcement Learning. CoRR abs/2406.02081 (2024)
- [i71] Jiachen Hu, Qinghua Liu, Chi Jin: On Limitation of Transformer for Learning HMMs. CoRR abs/2406.04089 (2024)
- [i70] Jiawei Ge, Yuanhao Wang, Wenzhe Li, Chi Jin: Towards Principled Superhuman AI for Multiplayer Symmetric Games. CoRR abs/2406.04201 (2024)
- [i69] Mahsa Bastankhah, Viraj Nadkarni, Xuechao Wang, Chi Jin, Sanjeev Kulkarni, Pramod Viswanath: Thinking Fast and Slow: Data-Driven Adaptive DeFi Borrow-Lending Protocol. CoRR abs/2407.10890 (2024)
- [i68] Tianyi Lin, Chi Jin, Michael I. Jordan: Two-Timescale Gradient Descent Ascent Algorithms for Nonconvex Minimax Optimization. CoRR abs/2408.11974 (2024)
- [i67] Shange Tang, Jiayun Wu, Jianqing Fan, Chi Jin: Benign Overfitting in Out-of-Distribution Generalization of Linear Models. CoRR abs/2412.14474 (2024)
- [i66] Zihan Ding, Chi Jin: Generative Diffusion Modeling: A Practical Handbook. CoRR abs/2412.17162 (2024)
- 2023
- [j3] Chi Jin, Zhuoran Yang, Zhaoran Wang, Michael I. Jordan: Provably Efficient Reinforcement Learning with Linear Function Approximation. Math. Oper. Res. 48(3): 1496-1521 (2023)
- [c55] Yuanhao Wang, Qinghua Liu, Yu Bai, Chi Jin: Breaking the Curse of Multiagency: Provably Efficient Decentralized Multi-Agent RL with Function Approximation. COLT 2023: 2793-2848
- [c54] Ahmed Khaled, Chi Jin: Faster federated optimization under second-order similarity. ICLR 2023
- [c53] Yuanhao Wang, Dingwen Kong, Yu Bai, Chi Jin: Learning Rationalizable Equilibria in Multiplayer Games. ICLR 2023
- [c52] Jiachen Hu, Han Zhong, Chi Jin, Liwei Wang: Provable Sim-to-real Transfer in Continuous Domain with Partial Observations. ICLR 2023
- [c51] Chengzhuo Ni, Yuda Song, Xuezhou Zhang, Zihan Ding, Chi Jin, Mengdi Wang: Representation Learning for Low-rank General-sum Markov Games. ICLR 2023
- [c50] Hadi Daneshmand, Jason D. Lee, Chi Jin: Efficient displacement convex optimization with particle gradient descent. ICML 2023: 6836-6854
- [c49] Yuanhao Wang, Qinghua Liu, Chi Jin: Is RLHF More Difficult than Standard RL? A Theoretical Perspective. NeurIPS 2023
- [c48] Ahmed Khaled, Konstantin Mishchenko, Chi Jin: DoWG Unleashed: An Efficient Universal Parameter-Free Gradient Descent Method. NeurIPS 2023
- [c47] Chung-Wei Lee, Qinghua Liu, Yasin Abbasi-Yadkori, Chi Jin, Tor Lattimore, Csaba Szepesvári: Context-lumpable stochastic bandits. NeurIPS 2023
- [c46] Qinghua Liu, Gellért Weisz, András György, Chi Jin, Csaba Szepesvári: Optimistic Natural Policy Gradient: a Simple Efficient Policy Optimization Framework for Online RL. NeurIPS 2023
- [c45] Qinghua Liu, Praneeth Netrapalli, Csaba Szepesvári, Chi Jin: Optimistic MLE: A Generic Model-Based Algorithm for Partially Observable Sequential Decision Making. STOC 2023: 363-376
- [i65] Hadi Daneshmand, Jason D. Lee, Chi Jin: Efficient displacement convex optimization with particle gradient descent. CoRR abs/2302.04753 (2023)
- [i64] Yuanhao Wang, Qinghua Liu, Yu Bai, Chi Jin: Breaking the Curse of Multiagency: Provably Efficient Decentralized Multi-Agent RL with Function Approximation. CoRR abs/2302.06606 (2023)
- [i63] Jiawei Ge, Shange Tang, Jianqing Fan, Chi Jin: On the Provable Advantage of Unsupervised Pretraining. CoRR abs/2303.01566 (2023)
- [i62] Zihan Ding, Yuanpei Chen, Allen Z. Ren, Shixiang Shane Gu, Hao Dong, Chi Jin: Learning a Universal Human Prior for Dexterous Manipulation from Human Preference. CoRR abs/2304.04602 (2023)
- [i61] Qinghua Liu, Gellért Weisz, András György, Chi Jin, Csaba Szepesvári: Optimistic Natural Policy Gradient: a Simple Efficient Policy Optimization Framework for Online RL. CoRR abs/2305.11032 (2023)
- [i60] Ahmed Khaled, Konstantin Mishchenko, Chi Jin: DoWG Unleashed: An Efficient Universal Parameter-Free Gradient Descent Method. CoRR abs/2305.16284 (2023)
- [i59] Chung-Wei Lee, Qinghua Liu, Yasin Abbasi-Yadkori, Chi Jin, Tor Lattimore, Csaba Szepesvári: Context-lumpable stochastic bandits. CoRR abs/2306.13053 (2023)
- [i58] Yuanhao Wang, Qinghua Liu, Chi Jin: Is RLHF More Difficult than Standard RL? CoRR abs/2306.14111 (2023)
- [i57] Zihan Ding, Chi Jin: Consistency Models as a Rich and Efficient Policy Class for Reinforcement Learning. CoRR abs/2309.16984 (2023)
- [i56] Viraj Nadkarni, Jiachen Hu, Ranvir Rana, Chi Jin, Sanjeev R. Kulkarni, Pramod Viswanath: ZeroSwap: Data-driven Optimal Market Making in DeFi. CoRR abs/2310.09413 (2023)
- [i55] Jiawei Ge, Shange Tang, Jianqing Fan, Cong Ma, Chi Jin: Maximum Likelihood Estimation is All You Need for Well-Specified Covariate Shift. CoRR abs/2311.15961 (2023)
- 2022
- [c44] Qinghua Liu, Alan Chung, Csaba Szepesvári, Chi Jin: When Is Partially Observable Reinforcement Learning Not Scary? COLT 2022: 5175-5220
- [c43] Xiaoyu Chen, Jiachen Hu, Chi Jin, Lihong Li, Liwei Wang: Understanding Domain Randomization for Sim-to-real Transfer. ICLR 2022
- [c42] Tanner Fiez, Chi Jin, Praneeth Netrapalli, Lillian J. Ratliff: Minimax Optimization with Smooth Algorithmic Adversaries. ICLR 2022
- [c41] Yu Bai, Chi Jin, Song Mei, Tiancheng Yu: Near-Optimal Learning of Extensive-Form Games with Imperfect Information. ICML 2022: 1337-1382
- [c40] Yonathan Efroni, Chi Jin, Akshay Krishnamurthy, Sobhan Miryoosefi: Provable Reinforcement Learning with a Short-Term Memory. ICML 2022: 5832-5850
- [c39] Chi Jin, Qinghua Liu, Tiancheng Yu: The Power of Exploiter: Provable Multi-Agent RL in Large State Spaces. ICML 2022: 10251-10279
- [c38] Qinghua Liu, Yuanhao Wang, Chi Jin: Learning Markov Games with Adversarial Opponents: Efficient Algorithms and Fundamental Limits. ICML 2022: 14036-14053
- [c37] Sobhan Miryoosefi, Chi Jin: A Simple Reward-free Approach to Constrained Reinforcement Learning. ICML 2022: 15666-15698
- [c36] Yu Bai, Chi Jin, Song Mei, Ziang Song, Tiancheng Yu: Efficient Phi-Regret Minimization in Extensive-Form Games via Online Mirror Descent. NeurIPS 2022
- [c35] Qinghua Liu, Csaba Szepesvári, Chi Jin: Sample-Efficient Reinforcement Learning of Partially Observable Markov Games. NeurIPS 2022
- [i54] Yu Bai, Chi Jin, Song Mei, Tiancheng Yu: Near-Optimal Learning of Extensive-Form Games with Imperfect Information. CoRR abs/2202.01752 (2022)
- [i53] Yonathan Efroni, Chi Jin, Akshay Krishnamurthy, Sobhan Miryoosefi: Provable Reinforcement Learning with a Short-Term Memory. CoRR abs/2202.03983 (2022)
- [i52] Qinghua Liu, Yuanhao Wang, Chi Jin: Learning Markov Games with Adversarial Opponents: Efficient Algorithms and Fundamental Limits. CoRR abs/2203.06803 (2022)
- [i51] Qinghua Liu, Alan Chung, Csaba Szepesvári, Chi Jin: When Is Partially Observable Reinforcement Learning Not Scary? CoRR abs/2204.08967 (2022)
- [i50] Yu Bai, Chi Jin, Song Mei, Ziang Song, Tiancheng Yu: Efficient Φ-Regret Minimization in Extensive-Form Games via Online Mirror Descent. CoRR abs/2205.15294 (2022)
- [i49] Qinghua Liu, Csaba Szepesvári, Chi Jin: Sample-Efficient Reinforcement Learning of Partially Observable Markov Games. CoRR abs/2206.01315 (2022)
- [i48] Zihan Ding, Dijia Su, Qinghua Liu, Chi Jin: A Deep Reinforcement Learning Approach for Finding Non-Exploitable Strategies in Two-Player Atari Games. CoRR abs/2207.08894 (2022)
- [i47] Ahmed Khaled, Chi Jin: Faster federated optimization under second-order similarity. CoRR abs/2209.02257 (2022)
- [i46] Qinghua Liu, Praneeth Netrapalli, Csaba Szepesvári, Chi Jin: Optimistic MLE - A Generic Model-based Algorithm for Partially Observable Sequential Decision Making. CoRR abs/2209.14997 (2022)
- [i45] Yuanhao Wang, Dingwen Kong, Yu Bai, Chi Jin: Learning Rationalizable Equilibria in Multiplayer Games. CoRR abs/2210.11402 (2022)
- [i44] Jiachen Hu, Han Zhong, Chi Jin, Liwei Wang: Provable Sim-to-real Transfer in Continuous Domain with Partial Observations. CoRR abs/2210.15598 (2022)
- [i43] Chengzhuo Ni, Yuda Song, Xuezhou Zhang, Chi Jin, Mengdi Wang: Representation Learning for General-sum Low-rank Markov Games. CoRR abs/2210.16976 (2022)
- 2021
- [j2] Chi Jin, Praneeth Netrapalli, Rong Ge, Sham M. Kakade, Michael I. Jordan: On Nonconvex Optimization for Machine Learning: Gradients, Stochasticity, and Saddle Points. J. ACM 68(2): 11:1-11:29 (2021)
- [c34] Mo Zhou, Rong Ge, Chi Jin: A Local Convergence Theory for Mildly Over-Parameterized Two-Layer Neural Network. COLT 2021: 4577-4632
- [c33] Dipendra Misra, Qinghua Liu, Chi Jin, John Langford: Provable Rich Observation Reinforcement Learning with Combinatorial Latent States. ICLR 2021
- [c32] Jiachen Hu, Xiaoyu Chen, Chi Jin, Lihong Li, Liwei Wang: Near-Optimal Representation Learning for Linear Bandits and Linear RL. ICML 2021: 4349-4358
- [c31] Qinghua Liu, Tiancheng Yu, Yu Bai, Chi Jin: A Sharp Analysis of Model-based Reinforcement Learning with Self-Play. ICML 2021: 7001-7010
- [c30] Nilesh Tripuraneni, Chi Jin, Michael I. Jordan: Provable Meta-Learning of Linear Representations. ICML 2021: 10434-10443
- [c29] Chi Jin, Qinghua Liu, Sobhan Miryoosefi: Bellman Eluder Dimension: New Rich Classes of RL Problems, and Sample-Efficient Algorithms. NeurIPS 2021: 13406-13418
- [c28] Yu Bai, Chi Jin, Huan Wang, Caiming Xiong: Sample-Efficient Learning of Stackelberg Equilibria in General-Sum Games. NeurIPS 2021: 25799-25811
- [i42] Chi Jin, Qinghua Liu, Sobhan Miryoosefi: Bellman Eluder Dimension: New Rich Classes of RL Problems, and Sample-Efficient Algorithms. CoRR abs/2102.00815 (2021)
- [i41] Mo Zhou, Rong Ge, Chi Jin: A Local Convergence Theory for Mildly Over-Parameterized Two-Layer Neural Network. CoRR abs/2102.02410 (2021)
- [i40] Jiachen Hu, Xiaoyu Chen, Chi Jin, Lihong Li, Liwei Wang: Near-optimal Representation Learning for Linear Bandits and Linear RL. CoRR abs/2102.04132 (2021)
- [i39] Yu Bai, Chi Jin, Huan Wang, Caiming Xiong: Sample-Efficient Learning of Stackelberg Equilibria in General-Sum Games. CoRR abs/2102.11494 (2021)
- [i38] Tanner Fiez, Chi Jin, Praneeth Netrapalli, Lillian J. Ratliff: Minimax Optimization with Smooth Algorithmic Adversaries. CoRR abs/2106.01488 (2021)
- [i37] Chi Jin, Qinghua Liu, Tiancheng Yu: The Power of Exploiter: Provable Multi-Agent RL in Large State Spaces. CoRR abs/2106.03352 (2021)
- [i36] Sobhan Miryoosefi, Chi Jin: A Simple Reward-free Approach to Constrained Reinforcement Learning. CoRR abs/2107.05216 (2021)
- [i35] Xiaoyu Chen, Jiachen Hu, Chi Jin, Lihong Li, Liwei Wang: Understanding Domain Randomization for Sim-to-real Transfer. CoRR abs/2110.03239 (2021)
- [i34] Chi Jin, Qinghua Liu, Yuanhao Wang, Tiancheng Yu: V-Learning - A Simple, Efficient, Decentralized Algorithm for Multiagent RL. CoRR abs/2110.14555 (2021)
- 2020
- [c27] Chi Jin, Zhuoran Yang, Zhaoran Wang, Michael I. Jordan: Provably efficient reinforcement learning with linear function approximation. COLT 2020: 2137-2143
- [c26] Tianyi Lin, Chi Jin, Michael I. Jordan: Near-Optimal Algorithms for Minimax Optimization. COLT 2020: 2738-2779
- [c25] Yu Bai, Chi Jin: Provable Self-Play Algorithms for Competitive Reinforcement Learning. ICML 2020: 551-560
- [c24] Qi Cai, Zhuoran Yang, Chi Jin, Zhaoran Wang: Provably Efficient Exploration in Policy Optimization. ICML 2020: 1283-1294
- [c23] Chi Jin, Tiancheng Jin, Haipeng Luo, Suvrit Sra, Tiancheng Yu: Learning Adversarial Markov Decision Processes with Bandit Feedback and Unknown Transition. ICML 2020: 4860-4869
- [c22] Chi Jin, Akshay Krishnamurthy, Max Simchowitz, Tiancheng Yu: Reward-Free Exploration for Reinforcement Learning. ICML 2020: 4870-4879
- [c21] Chi Jin, Praneeth Netrapalli, Michael I. Jordan: What is Local Optimality in Nonconvex-Nonconcave Minimax Optimization? ICML 2020: 4880-4889
- [c20] Tianyi Lin, Chi Jin, Michael I. Jordan: On Gradient Descent Ascent for Nonconvex-Concave Minimax Problems. ICML 2020: 6083-6093
- [c19] Yu Bai, Chi Jin, Tiancheng Yu: Near-Optimal Reinforcement Learning with Self-Play. NeurIPS 2020
- [c18] Chi Jin, Sham M. Kakade, Akshay Krishnamurthy, Qinghua Liu: Sample-Efficient Reinforcement Learning of Undercomplete POMDPs. NeurIPS 2020
- [c17] Nilesh Tripuraneni, Michael I. Jordan, Chi Jin: On the Theory of Transfer Learning: The Importance of Task Diversity. NeurIPS 2020
- [c16] Zhuoran Yang, Chi Jin, Zhaoran Wang, Mengdi Wang, Michael I. Jordan: Provably Efficient Reinforcement Learning with Kernel and Neural Function Approximations. NeurIPS 2020
- [i33] Tianyi Lin, Chi Jin, Michael I. Jordan: Near-Optimal Algorithms for Minimax Optimization. CoRR abs/2002.02417 (2020)
- [i32] Chi Jin, Akshay Krishnamurthy, Max Simchowitz, Tiancheng Yu: Reward-Free Exploration for Reinforcement Learning. CoRR abs/2002.02794 (2020)
- [i31] Yu Bai, Chi Jin: Provable Self-Play Algorithms for Competitive Reinforcement Learning. CoRR abs/2002.04017 (2020)
- [i30] Nilesh Tripuraneni, Chi Jin, Michael I. Jordan: Provable Meta-Learning of Linear Representations. CoRR abs/2002.11684 (2020)
- [i29] Nilesh Tripuraneni, Michael I. Jordan, Chi Jin: On the Theory of Transfer Learning: The Importance of Task Diversity. CoRR abs/2006.11650 (2020)
- [i28] Yu Bai, Chi Jin, Tiancheng Yu: Near-Optimal Reinforcement Learning with Self-Play. CoRR abs/2006.12007 (2020)
- [i27] Chi Jin, Sham M. Kakade, Akshay Krishnamurthy, Qinghua Liu: Sample-Efficient Reinforcement Learning of Undercomplete POMDPs. CoRR abs/2006.12484 (2020)
- [i26] Qinghua Liu, Tiancheng Yu, Yu Bai, Chi Jin: A Sharp Analysis of Model-based Reinforcement Learning with Self-Play. CoRR abs/2010.01604 (2020)
- [i25] Zhuoran Yang, Chi Jin, Zhaoran Wang, Mengdi Wang, Michael I. Jordan: Bridging Exploration and General Function Approximation in Reinforcement Learning: Provably Efficient Kernel and Neural Value Iterations. CoRR abs/2011.04622 (2020)
2010 – 2019
- 2019
- [b1] Chi Jin: Machine Learning: Why Do Simple Algorithms Work So Well? University of California, Berkeley, USA, 2019
- [i24] Chi Jin, Praneeth Netrapalli, Michael I. Jordan: Minmax Optimization: Stable Limit Points of Gradient Descent Ascent are Locally Optimal. CoRR abs/1902.00618 (2019)
- [i23] Chi Jin, Praneeth Netrapalli, Rong Ge, Sham M. Kakade, Michael I. Jordan: A Short Note on Concentration Inequalities for Random Vectors with SubGaussian Norm. CoRR abs/1902.03736 (2019)
- [i22] Chi Jin, Praneeth Netrapalli, Rong Ge, Sham M. Kakade, Michael I. Jordan: Stochastic Gradient Descent Escapes Saddle Points Efficiently. CoRR abs/1902.04811 (2019)
- [i21] Tianyi Lin, Chi Jin, Michael I. Jordan: On Gradient Descent Ascent for Nonconvex-Concave Minimax Problems. CoRR abs/1906.00331 (2019)
- [i20] Chi Jin, Zhuoran Yang, Zhaoran Wang, Michael I. Jordan: Provably Efficient Reinforcement Learning with Linear Function Approximation. CoRR abs/1907.05388 (2019)
- [i19] Qi Cai, Zhuoran Yang, Chi Jin, Zhaoran Wang: Provably Efficient Exploration in Policy Optimization. CoRR abs/1912.05830 (2019)
- 2018
- [c15] Chi Jin, Praneeth Netrapalli, Michael I. Jordan: Accelerated Gradient Descent Escapes Saddle Points Faster than Gradient Descent. COLT 2018: 1042-1085
- [c14] Nilesh Tripuraneni, Mitchell Stern, Chi Jin, Jeffrey Regier, Michael I. Jordan: Stochastic Cubic Regularization for Fast Nonconvex Optimization. NeurIPS 2018: 2904-2913
- [c13] Chi Jin, Zeyuan Allen-Zhu, Sébastien Bubeck, Michael I. Jordan: Is Q-Learning Provably Efficient? NeurIPS 2018: 4868-4878
- [c12] Chi Jin, Lydia T. Liu, Rong Ge, Michael I. Jordan: On the Local Minima of the Empirical Risk. NeurIPS 2018: 4901-4910
- [i18] Chi Jin, Lydia T. Liu, Rong Ge, Michael I. Jordan: Minimizing Nonconvex Population Risk from Rough Empirical Risk. CoRR abs/1803.09357 (2018)
- [i17] Yuansi Chen, Chi Jin, Bin Yu: Stability and Convergence Trade-off of Iterative Optimization Algorithms. CoRR abs/1804.01619 (2018)
- [i16] Chi Jin, Zeyuan Allen-Zhu, Sébastien Bubeck, Michael I. Jordan: Is Q-learning Provably Efficient? CoRR abs/1807.03765 (2018)
- [i15] Yi-An Ma, Yuansi Chen, Chi Jin, Nicolas Flammarion, Michael I. Jordan: Sampling Can Be Faster Than Optimization. CoRR abs/1811.08413 (2018)
- 2017
- [c11] Prateek Jain, Chi Jin, Sham M. Kakade, Praneeth Netrapalli: Global Convergence of Non-Convex Gradient Descent for Computing Matrix Squareroot. AISTATS 2017: 479-488
- [c10] Rong Ge, Chi Jin, Yi Zheng: No Spurious Local Minima in Nonconvex Low Rank Problems: A Unified Geometric Analysis. ICML 2017: 1233-1242
- [c9] Chi Jin, Rong Ge, Praneeth Netrapalli, Sham M. Kakade, Michael I. Jordan: How to Escape Saddle Points Efficiently. ICML 2017: 1724-1732
- [c8] Simon S. Du, Chi Jin, Jason D. Lee, Michael I. Jordan, Aarti Singh, Barnabás Póczos: Gradient Descent Can Take Exponential Time to Escape Saddle Points. NIPS 2017: 1067-1077
- [i14] Chi Jin, Rong Ge, Praneeth Netrapalli, Sham M. Kakade, Michael I. Jordan: How to Escape Saddle Points Efficiently. CoRR abs/1703.00887 (2017)
- [i13] Rong Ge, Chi Jin, Yi Zheng: No Spurious Local Minima in Nonconvex Low Rank Problems: A Unified Geometric Analysis. CoRR abs/1704.00708 (2017)
- [i12] Simon S. Du, Chi Jin, Jason D. Lee, Michael I. Jordan, Barnabás Póczos, Aarti Singh: Gradient Descent Can Take Exponential Time to Escape Saddle Points. CoRR abs/1705.10412 (2017)
- [i11] Nilesh Tripuraneni, Mitchell Stern, Chi Jin, Jeffrey Regier, Michael I. Jordan: Stochastic Cubic Regularization for Fast Nonconvex Optimization. CoRR abs/1711.02838 (2017)
- [i10] Chi Jin, Praneeth Netrapalli, Michael I. Jordan: Accelerated Gradient Descent Escapes Saddle Points Faster than Gradient Descent. CoRR abs/1711.10456 (2017)
- 2016
- [j1] Ziteng Wang, Chi Jin, Kai Fan, Jiaqi Zhang, Junliang Huang, Yiqiao Zhong, Liwei Wang: Differentially Private Data Releasing for Smooth Queries. J. Mach. Learn. Res. 17: 51:1-51:42 (2016)
- [c7] Prateek Jain, Chi Jin, Sham M. Kakade, Praneeth Netrapalli, Aaron Sidford: Streaming PCA: Matching Matrix Bernstein and Near-Optimal Finite Sample Guarantees for Oja's Algorithm. COLT 2016: 1147-1164
- [c6] Dan Garber, Elad Hazan, Chi Jin, Sham M. Kakade, Cameron Musco, Praneeth Netrapalli, Aaron Sidford: Faster Eigenvector Computation via Shift-and-Invert Preconditioning. ICML 2016: 2626-2634
- [c5] Rong Ge, Chi Jin, Sham M. Kakade, Praneeth Netrapalli, Aaron Sidford: Efficient Algorithms for Large-scale Generalized Eigenvector Computation and Canonical Correlation Analysis. ICML 2016: 2741-2750
- [c4] Chi Jin, Yuchen Zhang, Sivaraman Balakrishnan, Martin J. Wainwright, Michael I. Jordan: Local Maxima in the Likelihood of Gaussian Mixture Models: Structural Results and Algorithmic Consequences. NIPS 2016: 4116-4124
- [c3]