


Sanjeev Arora
Person information

- affiliation: Princeton University, NJ, USA
- award (2012): Fulkerson Prize
- award (2011): ACM Prize in Computing
- award (2001, 2010): Gödel Prize
2020 – today
- 2023
- [c107] Xinran Gu, Kaifeng Lyu, Longbo Huang, Sanjeev Arora: Why (and When) does Local SGD Generalize Better than SGD? ICLR 2023
- [c106] Nikunj Saunshi, Arushi Gupta, Mark Braverman, Sanjeev Arora: Understanding Influence Functions and Datamodels via Harmonic Analysis. ICLR 2023
- [c105] Sadhika Malladi, Alexander Wettig, Dingli Yu, Danqi Chen, Sanjeev Arora: A Kernel-Based View of Language Model Fine-Tuning. ICML 2023: 23610-23641
- [c104] Abhishek Panigrahi, Nikunj Saunshi, Haoyu Zhao, Sanjeev Arora: Task-Specific Skill Localization in Fine-tuned Language Models. ICML 2023: 27011-27033
- [i78] Abhishek Panigrahi, Nikunj Saunshi, Haoyu Zhao, Sanjeev Arora: Task-Specific Skill Localization in Fine-tuned Language Models. CoRR abs/2302.06600 (2023)
- [i77] Xinran Gu, Kaifeng Lyu, Longbo Huang, Sanjeev Arora: Why (and When) does Local SGD Generalize Better than SGD? CoRR abs/2303.01215 (2023)
- [i76] Haoyu Zhao, Abhishek Panigrahi, Rong Ge, Sanjeev Arora: Do Transformers Parse while Predicting the Masked Word? CoRR abs/2303.08117 (2023)
- [i75] Sadhika Malladi, Tianyu Gao, Eshaan Nichani, Alex Damian, Jason D. Lee, Danqi Chen, Sanjeev Arora: Fine-Tuning Language Models with Just Forward Passes. CoRR abs/2305.17333 (2023)
- [i74] Abhishek Panigrahi, Sadhika Malladi, Mengzhou Xia, Sanjeev Arora: Trainable Transformer in Transformer. CoRR abs/2307.01189 (2023)
- [i73] Sanjeev Arora, Anirudh Goyal: A Theory for Emergence of Complex Skills in Language Models. CoRR abs/2307.15936 (2023)
- 2022
- [c103] Zhiyuan Li, Tianhao Wang, Sanjeev Arora: What Happens after SGD Reaches Zero Loss? --A Mathematical Framework. ICLR 2022
- [c102] Yi Zhang, Arushi Gupta, Nikunj Saunshi, Sanjeev Arora: On Predicting Generalization using GANs. ICLR 2022
- [c101] Sanjeev Arora, Zhiyuan Li, Abhishek Panigrahi: Understanding Gradient Descent on the Edge of Stability in Deep Learning. ICML 2022: 948-1024
- [c100] Nikunj Saunshi, Jordan T. Ash, Surbhi Goel, Dipendra Misra, Cyril Zhang, Sanjeev Arora, Sham M. Kakade, Akshay Krishnamurthy: Understanding Contrastive Learning Requires Incorporating Inductive Biases. ICML 2022: 19250-19286
- [c99] Zhiyuan Li, Tianhao Wang, Jason D. Lee, Sanjeev Arora: Implicit Bias of Gradient Descent on Reparametrized Models: On Equivalence to Mirror Descent. NeurIPS 2022
- [c98] Arushi Gupta, Nikunj Saunshi, Dingli Yu, Kaifeng Lyu, Sanjeev Arora: New Definitions and Evaluations for Saliency Methods: Staying Intrinsic, Complete and Sound. NeurIPS 2022
- [c97] Kaifeng Lyu, Zhiyuan Li, Sanjeev Arora: Understanding the Generalization Benefit of Normalization Layers: Sharpness Reduction. NeurIPS 2022
- [c96] Sadhika Malladi, Kaifeng Lyu, Abhishek Panigrahi, Sanjeev Arora: On the SDEs and Scaling Rules for Adaptive Gradient Algorithms. NeurIPS 2022
- [i72] Nikunj Saunshi, Jordan T. Ash, Surbhi Goel, Dipendra Misra, Cyril Zhang, Sanjeev Arora, Sham M. Kakade, Akshay Krishnamurthy: Understanding Contrastive Learning Requires Incorporating Inductive Biases. CoRR abs/2202.14037 (2022)
- [i71] Zhou Lu, Wenhan Xia, Sanjeev Arora, Elad Hazan: Adaptive Gradient Methods with Local Guarantees. CoRR abs/2203.01400 (2022)
- [i70] Sanjeev Arora, Zhiyuan Li, Abhishek Panigrahi: Understanding Gradient Descent on Edge of Stability in Deep Learning. CoRR abs/2205.09745 (2022)
- [i69] Sadhika Malladi, Kaifeng Lyu, Abhishek Panigrahi, Sanjeev Arora: On the SDEs and Scaling Rules for Adaptive Gradient Algorithms. CoRR abs/2205.10287 (2022)
- [i68] Kaifeng Lyu, Zhiyuan Li, Sanjeev Arora: Understanding the Generalization Benefit of Normalization Layers: Sharpness Reduction. CoRR abs/2206.07085 (2022)
- [i67] Zhiyuan Li, Tianhao Wang, Jason D. Lee, Sanjeev Arora: Implicit Bias of Gradient Descent on Reparametrized Models: On Equivalence to Mirror Descent. CoRR abs/2207.04036 (2022)
- [i66] Nikunj Saunshi, Arushi Gupta, Mark Braverman, Sanjeev Arora: Understanding Influence Functions and Datamodels via Harmonic Analysis. CoRR abs/2210.01072 (2022)
- [i65] Sadhika Malladi, Alexander Wettig, Dingli Yu, Danqi Chen, Sanjeev Arora: A Kernel-Based View of Language Model Fine-Tuning. CoRR abs/2210.05643 (2022)
- [i64] Arushi Gupta, Nikunj Saunshi, Dingli Yu, Kaifeng Lyu, Sanjeev Arora: New Definitions and Evaluations for Saliency Methods: Staying Intrinsic, Complete and Sound. CoRR abs/2211.02912 (2022)
- 2021
- [j38] Sanjeev Arora: Technical perspective: Why don't today's deep nets overfit to their training data? Commun. ACM 64(3): 106 (2021)
- [c95] Zhiyuan Li, Yi Zhang, Sanjeev Arora: Why Are Convolutional Nets More Sample-Efficient than Fully-Connected Nets? ICLR 2021
- [c94] Nikunj Saunshi, Sadhika Malladi, Sanjeev Arora: A Mathematical Exploration of Why Language Models Help Solve Downstream Tasks. ICLR 2021
- [c93] Yangsibo Huang, Samyak Gupta, Zhao Song, Kai Li, Sanjeev Arora: Evaluating Gradient Inversion Attacks and Defenses in Federated Learning. NeurIPS 2021: 7232-7241
- [c92] Zhiyuan Li, Sadhika Malladi, Sanjeev Arora: On the Validity of Modeling SGD with Stochastic Differential Equations (SDEs). NeurIPS 2021: 12712-12725
- [c91] Kaifeng Lyu, Zhiyuan Li, Runzhe Wang, Sanjeev Arora: Gradient Descent on Two-layer Nets: Margin Maximization and Simplicity Bias. NeurIPS 2021: 12978-12991
- [c90] Sanjeev Arora: Opening the Black Box of Deep Learning: Some Lessons and Take-aways. SIGMETRICS (Abstracts) 2021: 1
- [i63] Zhiyuan Li, Sadhika Malladi, Sanjeev Arora: On the Validity of Modeling SGD with Stochastic Differential Equations (SDEs). CoRR abs/2102.12470 (2021)
- [i62] Sanjeev Arora, Yi Zhang: Rip van Winkle's Razor: A Simple Estimate of Overfit to Test Data. CoRR abs/2102.13189 (2021)
- [i61] Zhiyuan Li, Tianhao Wang, Sanjeev Arora: What Happens after SGD Reaches Zero Loss? -A Mathematical Framework. CoRR abs/2110.06914 (2021)
- [i60] Kaifeng Lyu, Zhiyuan Li, Runzhe Wang, Sanjeev Arora: Gradient Descent on Two-layer Nets: Margin Maximization and Simplicity Bias. CoRR abs/2110.13905 (2021)
- [i59] Yi Zhang, Arushi Gupta, Nikunj Saunshi, Sanjeev Arora: On Predicting Generalization using GANs. CoRR abs/2111.14212 (2021)
- [i58] Yangsibo Huang, Samyak Gupta, Zhao Song, Kai Li, Sanjeev Arora: Evaluating Gradient Inversion Attacks and Defenses in Federated Learning. CoRR abs/2112.00059 (2021)
- 2020
- [c89] Yangsibo Huang, Zhao Song, Danqi Chen, Kai Li, Sanjeev Arora: TextHide: Tackling Data Privacy for Language Understanding Tasks. EMNLP (Findings) 2020: 1368-1382
- [c88] Sanjeev Arora: The Quest for Mathematical Understanding of Deep Learning (Invited Talk). FSTTCS 2020: 1:1-1:1
- [c87] Zhiyuan Li, Sanjeev Arora: An Exponential Learning Rate Schedule for Deep Learning. ICLR 2020
- [c86] Sanjeev Arora, Simon S. Du, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang, Dingli Yu: Harnessing the Power of Infinitely Wide Deep Nets on Small-data Tasks. ICLR 2020
- [c85] Sanjeev Arora, Simon S. Du, Sham M. Kakade, Yuping Luo, Nikunj Saunshi: Provable Representation Learning for Imitation Learning via Bi-level Optimization. ICML 2020: 367-376
- [c84] Yangsibo Huang, Zhao Song, Kai Li, Sanjeev Arora: InstaHide: Instance-hiding Schemes for Private Distributed Learning. ICML 2020: 4507-4518
- [c83] Nikunj Saunshi, Yi Zhang, Mikhail Khodak, Sanjeev Arora: A Sample Complexity Separation between Non-Convex and Convex Meta-Learning. ICML 2020: 8512-8521
- [c82] Zhiyuan Li, Kaifeng Lyu, Sanjeev Arora: Reconciling Modern Deep Learning with Traditional Optimization Analyses: The Intrinsic Learning Rate. NeurIPS 2020
- [c81] Yi Zhang, Orestis Plevrakis, Simon S. Du, Xingguo Li, Zhao Song, Sanjeev Arora: Over-parameterized Adversarial Training: An Analysis Overcoming the Curse of Dimensionality. NeurIPS 2020
- [i57] Yi Zhang, Orestis Plevrakis, Simon S. Du, Xingguo Li, Zhao Song, Sanjeev Arora: Over-parameterized Adversarial Training: An Analysis Overcoming the Curse of Dimensionality. CoRR abs/2002.06668 (2020)
- [i56] Sanjeev Arora, Simon S. Du, Sham M. Kakade, Yuping Luo, Nikunj Saunshi: Provable Representation Learning for Imitation Learning via Bi-level Optimization. CoRR abs/2002.10544 (2020)
- [i55] Nikunj Saunshi, Yi Zhang, Mikhail Khodak, Sanjeev Arora: A Sample Complexity Separation between Non-Convex and Convex Meta-Learning. CoRR abs/2002.11172 (2020)
- [i54] Yangsibo Huang, Yushan Su, Sachin Ravi, Zhao Song, Sanjeev Arora, Kai Li: Privacy-preserving Learning via Deep Net Pruning. CoRR abs/2003.01876 (2020)
- [i53] Yangsibo Huang, Zhao Song, Kai Li, Sanjeev Arora: InstaHide: Instance-hiding Schemes for Private Distributed Learning. CoRR abs/2010.02772 (2020)
- [i52] Zhiyuan Li, Kaifeng Lyu, Sanjeev Arora: Reconciling Modern Deep Learning with Traditional Optimization Analyses: The Intrinsic Learning Rate. CoRR abs/2010.02916 (2020)
- [i51] Nikunj Saunshi, Sadhika Malladi, Sanjeev Arora: A Mathematical Exploration of Why Language Models Help Solve Downstream Tasks. CoRR abs/2010.03648 (2020)
- [i50] Yangsibo Huang, Zhao Song, Danqi Chen, Kai Li, Sanjeev Arora: TextHide: Tackling Data Privacy in Language Understanding Tasks. CoRR abs/2010.06053 (2020)
- [i49] Zhiyuan Li, Yi Zhang, Sanjeev Arora: Why Are Convolutional Nets More Sample-Efficient than Fully-Connected Nets? CoRR abs/2010.08515 (2020)
2010 – 2019
- 2019
- [c80] Sanjeev Arora, Nadav Cohen, Noah Golowich, Wei Hu: A Convergence Analysis of Gradient Descent for Deep Linear Neural Networks. ICLR (Poster) 2019
- [c79] Sanjeev Arora, Zhiyuan Li, Kaifeng Lyu: Theoretical Analysis of Auto Rate-Tuning by Batch Normalization. ICLR (Poster) 2019
- [c78] Sanjeev Arora, Simon S. Du, Wei Hu, Zhiyuan Li, Ruosong Wang: Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks. ICML 2019: 322-332
- [c77] Nikunj Saunshi, Orestis Plevrakis, Sanjeev Arora, Mikhail Khodak, Hrishikesh Khandeparkar: A Theoretical Analysis of Contrastive Unsupervised Representation Learning. ICML 2019: 5628-5637
- [c76] Sanjeev Arora, Nadav Cohen, Wei Hu, Yuping Luo: Implicit Regularization in Deep Matrix Factorization. NeurIPS 2019: 7411-7422
- [c75] Sanjeev Arora, Simon S. Du, Wei Hu, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang: On Exact Computation with an Infinitely Wide Neural Net. NeurIPS 2019: 8139-8148
- [c74] Rohith Kuditipudi, Xiang Wang, Holden Lee, Yi Zhang, Zhiyuan Li, Wei Hu, Rong Ge, Sanjeev Arora: Explaining Landscape Connectivity of Low-cost Solutions for Multilayer Nets. NeurIPS 2019: 14574-14583
- [i48] Sanjeev Arora, Simon S. Du, Wei Hu, Zhiyuan Li, Ruosong Wang: Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks. CoRR abs/1901.08584 (2019)
- [i47] Sanjeev Arora, Hrishikesh Khandeparkar, Mikhail Khodak, Orestis Plevrakis, Nikunj Saunshi: A Theoretical Analysis of Contrastive Unsupervised Representation Learning. CoRR abs/1902.09229 (2019)
- [i46] Sanjeev Arora, Simon S. Du, Wei Hu, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang: On Exact Computation with an Infinitely Wide Neural Net. CoRR abs/1904.11955 (2019)
- [i45] Arushi Gupta, Sanjeev Arora: A Simple Saliency Method That Passes the Sanity Checks. CoRR abs/1905.12152 (2019)
- [i44] Sanjeev Arora, Nadav Cohen, Wei Hu, Yuping Luo: Implicit Regularization in Deep Matrix Factorization. CoRR abs/1905.13655 (2019)
- [i43] Rohith Kuditipudi, Xiang Wang, Holden Lee, Yi Zhang, Zhiyuan Li, Wei Hu, Sanjeev Arora, Rong Ge: Explaining Landscape Connectivity of Low-cost Solutions for Multilayer Nets. CoRR abs/1906.06247 (2019)
- [i42] Sanjeev Arora, Simon S. Du, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang, Dingli Yu: Harnessing the Power of Infinitely Wide Deep Nets on Small-data Tasks. CoRR abs/1910.01663 (2019)
- [i41] Zhiyuan Li, Sanjeev Arora: An Exponential Learning Rate Schedule for Deep Learning. CoRR abs/1910.07454 (2019)
- [i40] Zhiyuan Li, Ruosong Wang, Dingli Yu, Simon S. Du, Wei Hu, Ruslan Salakhutdinov, Sanjeev Arora: Enhanced Convolutional Neural Tangent Kernels. CoRR abs/1911.00809 (2019)
- 2018
- [j37] Sanjeev Arora, Rong Ge, Yoni Halpern, David M. Mimno, Ankur Moitra, David A. Sontag, Yichen Wu, Michael Zhu: Learning topic models - provably and efficiently. Commun. ACM 61(4): 85-93 (2018)
- [j36] Kiran Vodrahalli, Po-Hsuan Chen, Yingyu Liang, Christopher Baldassano, Janice Chen, Esther Yong, Christopher J. Honey, Uri Hasson, Peter J. Ramadge, Kenneth A. Norman, Sanjeev Arora: Mapping between fMRI responses to movies and their natural language annotations. NeuroImage 180(Part): 223-231 (2018)
- [j35] Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, Andrej Risteski: Linear Algebraic Structure of Word Senses, with Applications to Polysemy. Trans. Assoc. Comput. Linguistics 6: 483-495 (2018)
- [c73] Mikhail Khodak, Nikunj Saunshi, Yingyu Liang, Tengyu Ma, Brandon Stewart, Sanjeev Arora: A La Carte Embedding: Cheap but Effective Induction of Semantic Feature Vectors. ACL (1) 2018: 12-22
- [c72] Sanjeev Arora, Wei Hu, Pravesh K. Kothari: An Analysis of the t-SNE Algorithm for Data Visualization. COLT 2018: 1455-1462
- [c71] Sanjeev Arora, Elad Hazan, Holden Lee, Karan Singh, Cyril Zhang, Yi Zhang: Towards Provable Control for Unknown Linear Dynamical Systems. ICLR (Workshop) 2018
- [c70] Sanjeev Arora, Mikhail Khodak, Nikunj Saunshi, Kiran Vodrahalli: A Compressed Sensing View of Unsupervised Text Embeddings, Bag-of-n-Grams, and LSTMs. ICLR (Poster) 2018
- [c69] Sanjeev Arora, Andrej Risteski, Yi Zhang: Do GANs learn the distribution? Some Theory and Empirics. ICLR (Poster) 2018
- [c68] Sanjeev Arora, Nadav Cohen, Elad Hazan: On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization. ICML 2018: 244-253
- [c67] Sanjeev Arora, Rong Ge, Behnam Neyshabur, Yi Zhang: Stronger Generalization Bounds for Deep Nets via a Compression Approach. ICML 2018: 254-263
- [i39] Sanjeev Arora, Rong Ge, Behnam Neyshabur, Yi Zhang: Stronger generalization bounds for deep nets via a compression approach. CoRR abs/1802.05296 (2018)
- [i38] Sanjeev Arora, Nadav Cohen, Elad Hazan: On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization. CoRR abs/1802.06509 (2018)
- [i37] Sanjeev Arora, Wei Hu, Pravesh K. Kothari: An Analysis of the t-SNE Algorithm for Data Visualization. CoRR abs/1803.01768 (2018)
- [i36] Mikhail Khodak, Nikunj Saunshi, Yingyu Liang, Tengyu Ma, Brandon Stewart, Sanjeev Arora: A La Carte Embedding: Cheap but Effective Induction of Semantic Feature Vectors. CoRR abs/1805.05388 (2018)
- [i35] Sanjeev Arora, Nadav Cohen, Noah Golowich, Wei Hu: A Convergence Analysis of Gradient Descent for Deep Linear Neural Networks. CoRR abs/1810.02281 (2018)
- [i34] Sanjeev Arora, Zhiyuan Li, Kaifeng Lyu: Theoretical Analysis of Auto Rate-Tuning by Batch Normalization. CoRR abs/1812.03981 (2018)
- 2017
- [c66] Holden Lee, Rong Ge, Tengyu Ma, Andrej Risteski, Sanjeev Arora: On the Ability of Neural Nets to Express Distributions. COLT 2017: 1271-1296
- [c65] Sanjeev Arora, Yingyu Liang, Tengyu Ma: A Simple but Tough-to-Beat Baseline for Sentence Embeddings. ICLR (Poster) 2017
- [c64] Sanjeev Arora, Rong Ge, Yingyu Liang, Tengyu Ma, Yi Zhang: Generalization and Equilibrium in Generative Adversarial Nets (GANs). ICML 2017: 224-232
- [c63] Sanjeev Arora, Rong Ge, Tengyu Ma, Andrej Risteski: Provable learning of noisy-OR networks. STOC 2017: 1057-1066
- [i33] Holden Lee, Rong Ge, Andrej Risteski, Tengyu Ma, Sanjeev Arora: On the ability of neural nets to express distributions. CoRR abs/1702.07028 (2017)
- [i32] Sanjeev Arora, Rong Ge, Yingyu Liang, Tengyu Ma, Yi Zhang: Generalization and Equilibrium in Generative Adversarial Nets (GANs). CoRR abs/1703.00573 (2017)
- [i31] Mikhail Khodak, Andrej Risteski, Christiane Fellbaum, Sanjeev Arora: Extending and Improving Wordnet via Unsupervised Word Embeddings. CoRR abs/1705.00217 (2017)
- [i30] Sanjeev Arora, Andrej Risteski: Provable benefits of representation learning. CoRR abs/1706.04601 (2017)
- [i29] Sanjeev Arora, Yi Zhang: Do GANs actually learn the distribution? An empirical study. CoRR abs/1706.08224 (2017)
- [i28] Sanjeev Arora, Andrej Risteski, Yi Zhang: Theoretical limitations of Encoder-Decoder GAN architectures. CoRR abs/1711.02651 (2017)
- 2016
- [j34] Sanjeev Arora, Satyen Kale: A Combinatorial, Primal-Dual Approach to Semidefinite Programs. J. ACM 63(2): 12:1-12:35 (2016)
- [j33] Sanjeev Arora, Rong Ge, Ravi Kannan, Ankur Moitra: Computing a Nonnegative Matrix Factorization - Provably. SIAM J. Comput. 45(4): 1582-1611 (2016)
- [j32] Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, Andrej Risteski: A Latent Variable Model Approach to PMI-based Word Embeddings. Trans. Assoc. Comput. Linguistics 4: 385-399 (2016)
- [c62] Sanjeev Arora, Rong Ge, Frederic Koehler, Tengyu Ma, Ankur Moitra: Provable Algorithms for Inference in Topic Models. ICML 2016: 2859-2867
- [i27] Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, Andrej Risteski: Linear Algebraic Structure of Word Senses, with Applications to Polysemy. CoRR abs/1601.03764 (2016)
- [i26] Sanjeev Arora, Rong Ge, Frederic Koehler, Tengyu Ma, Ankur Moitra: Provable Algorithms for Inference in Topic Models. CoRR abs/1605.08491 (2016)
- [i25] Kiran Vodrahalli, Po-Hsuan Chen, Yingyu Liang, Janice Chen, Esther Yong, Christopher J. Honey, Peter J. Ramadge, Kenneth A. Norman, Sanjeev Arora: Mapping Between Natural Movie fMRI Responses and Word-Sequence Representations. CoRR abs/1610.03914 (2016)
- [i24] Sanjeev Arora, Rong Ge, Tengyu Ma, Andrej Risteski: Provable learning of Noisy-or Networks. CoRR abs/1612.08795 (2016)
- 2015
- [j31] Sanjeev Arora, Rong Ge, Ankur Moitra, Sushant Sachdeva: Provable ICA with Unknown Gaussian Noise, and Implications for Gaussian Mixtures and Autoencoders. Algorithmica 72(1): 215-236 (2015)
- [j30] Sanjeev Arora, Vinay Kumar Nangia, Rajat Agrawal: Making strategy process intelligent with business intelligence: an empirical investigation. Int. J. Data Anal. Tech. Strateg. 7(1): 77-95 (2015)
- [j29] Sanjeev Arora, Boaz Barak, David Steurer: Subexponential Algorithms for Unique Games and Related Problems. J. ACM 62(5): 42:1-42:25 (2015)
- [c61] Sanjeev Arora, Rong Ge, Tengyu Ma, Ankur Moitra: Simple, Efficient, and Neural Algorithms for Sparse Coding. COLT 2015: 113-149
- [c60] Sanjeev Arora: Overcoming Intractability in Unsupervised Learning (Invited Talk). STACS 2015: 1-1
- [i23] Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, Andrej Risteski: Random Walks on Context Spaces: Towards an Explanation of the Mysteries of Semantic Word Embeddings. CoRR abs/1502.03520 (2015)
- [i22] Sanjeev Arora, Rong Ge, Tengyu Ma, Ankur Moitra: Simple, Efficient, and Neural Algorithms for Sparse Coding. CoRR abs/1503.00778 (2015)
- [i21] Sanjeev Arora, Yingyu Liang, Tengyu Ma: Why are deep nets reversible: A simple theory, with implications for training. CoRR abs/1511.05653 (2015)
- 2014
- [j28] Sanjeev Arora: Thoughts on Paper Publishing in the Digital Age. Bull. EATCS 112 (2014)
- [c59] Sanjeev Arora, Rong Ge, Ankur Moitra: New Algorithms for Learning Incoherent and Overcomplete Dictionaries. COLT 2014: 779-806
- [c58] Sanjeev Arora, Aditya Bhaskara, Rong Ge, Tengyu Ma: Provable Bounds for Learning Some Deep Representations. ICML 2014: 584-592
- [c57] Prabhat Chand, Pratima Murthy, Vivek Gupta, Arun Kandasamy, Deepak Jayarajan, Lakshmanan Sethu, Vivek Benegal, Mathew Varghese, Miriam Komaromy, Sanjeev Arora: Technology Enhanced Learning in Addiction Mental Health: Developing a Virtual Knowledge Network: NIMHANS ECHO. T4E 2014: 229-232
- [i20] Sanjeev Arora, Aditya Bhaskara, Rong Ge, Tengyu Ma: More Algorithms for Provable Dictionary Learning. CoRR abs/1401.0579 (2014)
- 2013
- [c56] Sanjeev Arora, Rong Ge, Ali Kemal Sinop: Towards a Better Approximation for Sparsest Cut? FOCS 2013: 270-279
- [c55] Sanjeev Arora, Rong Ge, Yonatan Halpern, David M. Mimno, Ankur Moitra, David A. Sontag, Yichen Wu, Michael Zhu: A Practical Algorithm for Topic Modeling with Provable Guarantees. ICML (2) 2013: 280-288
- [i19] Sanjeev Arora, Rong Ge, Ali Kemal Sinop: Towards a better approximation for sparsest cut? CoRR abs/1304.3365 (2013)
- [i18] Sanjeev Arora, Rong Ge, Ankur Moitra: New Algorithms for Learning Incoherent and Overcomplete Dictionaries. CoRR abs/1308.6273 (2013)
- [i17] Sanjeev Arora, Aditya Bhaskara, Rong Ge, Tengyu Ma: Provable Bounds for Learning Some Deep Representations. CoRR abs/1310.6343 (2013)
- 2012
- [j27] Sanjeev Arora: The Gödel Prize 2013. Call for Nominations. Bull. EATCS 108: 17-21 (2012)
- [j26] Sanjeev Arora, László Lovász, Ilan Newman, Yuval Rabani,