Sham M. Kakade
Sham Machandranath Kakade
Person information
- affiliation: Harvard University, Cambridge, MA, USA
- affiliation (former): University of Washington, Department of Statistics, Seattle, WA, USA
- affiliation (former): Microsoft Research New England, Cambridge, MA, USA
- affiliation (former): Toyota Technological Institute at Chicago, IL, USA
- affiliation (former): University of Pennsylvania, Department of Statistics, Philadelphia, PA, USA
- affiliation: University College London, Gatsby Computational Neuroscience Unit, UK
2020 – today
- 2024
- [j29] Motoya Ohnishi, Isao Ishikawa, Kendall Lowrey, Masahiro Ikeda, Sham M. Kakade, Yoshinobu Kawahara: Koopman Spectrum Nonlinear Regulators and Efficient Online Learning. Trans. Mach. Learn. Res. 2024 (2024)
- [c157] Depen Morwani, Benjamin L. Edelman, Costin-Andrei Oncescu, Rosie Zhao, Sham M. Kakade: Feature emergence via margin maximization: case studies in algebraic tasks. ICLR 2024
- [c156] Nikhil Vyas, Depen Morwani, Rosie Zhao, Gal Kaplun, Sham M. Kakade, Boaz Barak: Beyond Implicit Bias: The Insignificance of SGD Noise in Online Learning. ICML 2024
- [c155] Kenneth Li, Samy Jelassi, Hugh Zhang, Sham M. Kakade, Martin Wattenberg, David Brandfonbrener: Q-Probe: A Lightweight Approach to Reward Maximization for Language Models. ICML 2024
- [c154] Samy Jelassi, David Brandfonbrener, Sham M. Kakade, Eran Malach: Repeat After Me: Transformers are Better than State Space Models at Copying. ICML 2024
- [c153] Hanlin Zhang, Yifan Zhang, Yaodong Yu, Dhruv Madeka, Dean Foster, Eric P. Xing, Himabindu Lakkaraju, Sham M. Kakade: A Study on the Calibration of In-context Learning. NAACL-HLT 2024: 6118-6136
- [i146] Samy Jelassi, David Brandfonbrener, Sham M. Kakade, Eran Malach: Repeat After Me: Transformers are Better than State Space Models at Copying. CoRR abs/2402.01032 (2024)
- [i145] Kenneth Li, Samy Jelassi, Hugh Zhang, Sham M. Kakade, Martin Wattenberg, David Brandfonbrener: Q-Probe: A Lightweight Approach to Reward Maximization for Language Models. CoRR abs/2402.14688 (2024)
- [i144] Zhenting Qi, Hanlin Zhang, Eric P. Xing, Sham M. Kakade, Himabindu Lakkaraju: Follow My Instruction and Spill the Beans: Scalable Data Extraction from Retrieval-Augmented Generation Systems. CoRR abs/2402.17840 (2024)
- [i143] Yiwen Kou, Zixiang Chen, Quanquan Gu, Sham M. Kakade: Matching the Statistical Query Lower Bound for k-sparse Parity Problems with Stochastic Gradient Descent. CoRR abs/2404.12376 (2024)
- [i142] Ethan Shen, Alan Fan, Sarah M. Pratt, Jae Sung Park, Matthew Wallingford, Sham M. Kakade, Ari Holtzman, Ranjay Krishna, Ali Farhadi, Aditya Kusupati: Superposed Decoding: Multiple Generations from a Single Autoregressive Inference Pass. CoRR abs/2405.18400 (2024)
- [i141] Licong Lin, Jingfeng Wu, Sham M. Kakade, Peter L. Bartlett, Jason D. Lee: Scaling Laws in Linear Regression: Compute, Parameters, and Data. CoRR abs/2406.08466 (2024)
- [i140] David Brandfonbrener, Hanlin Zhang, Andreas Kirsch, Jonathan Richard Schwarz, Sham M. Kakade: CoLoR-Filter: Conditional Loss Reduction Filtering for Targeted Language Model Pre-training. CoRR abs/2406.10670 (2024)
- [i139] Edwin Zhang, Vincent Zhu, Naomi Saphra, Anat Kleiman, Benjamin L. Edelman, Milind Tambe, Sham M. Kakade, Eran Malach: Transcendence: Generative Models Can Outperform The Experts That Train Them. CoRR abs/2406.11741 (2024)
- [i138] Jeffrey Li, Alex Fang, Georgios Smyrnis, Maor Ivgi, Matt Jordan, Samir Yitzhak Gadre, Hritik Bansal, Etash Kumar Guha, Sedrick Keh, Kushal Arora, Saurabh Garg, Rui Xin, Niklas Muennighoff, Reinhard Heckel, Jean Mercat, Mayee Chen, Suchin Gururangan, Mitchell Wortsman, Alon Albalak, Yonatan Bitton, Marianna Nezhurina, Amro Abbas, Cheng-Yu Hsieh, Dhruba Ghosh, Josh Gardner, Maciej Kilian, Hanlin Zhang, Rulin Shao, Sarah M. Pratt, Sunny Sanyal, Gabriel Ilharco, Giannis Daras, Kalyani Marathe, Aaron Gokaslan, Jieyu Zhang, Khyathi Raghavi Chandu, Thao Nguyen, Igor Vasiljevic, Sham M. Kakade, Shuran Song, Sujay Sanghavi, Fartash Faghri, Sewoong Oh, Luke Zettlemoyer, Kyle Lo, Alaaeldin El-Nouby, Hadi Pouransari, Alexander Toshev, Stephanie Wang, Dirk Groeneveld, Luca Soldaini, Pang Wei Koh, Jenia Jitsev, Thomas Kollar, Alexandros G. Dimakis, Yair Carmon, Achal Dave, Ludwig Schmidt, Vaishaal Shankar: DataComp-LM: In search of the next generation of training sets for language models. CoRR abs/2406.11794 (2024)
- [i137] Depen Morwani, Itai Shapira, Nikhil Vyas, Eran Malach, Sham M. Kakade, Lucas Janson: A New Perspective on Shampoo's Preconditioner. CoRR abs/2406.17748 (2024)
- [i136] Ziqi Wang, Hanlin Zhang, Xiner Li, Kuan-Hao Huang, Chi Han, Shuiwang Ji, Sham M. Kakade, Hao Peng, Heng Ji: Eliminating Position Bias of Language Models: A Mechanistic Approach. CoRR abs/2407.01100 (2024)
- [i135] Kaiying Hou, David Brandfonbrener, Sham M. Kakade, Samy Jelassi, Eran Malach: Universal Length Generalization with Turing Programs. CoRR abs/2407.03310 (2024)
- [i134] Rosie Zhao, Depen Morwani, David Brandfonbrener, Nikhil Vyas, Sham M. Kakade: Deconstructing What Makes a Good Optimizer for Language Models. CoRR abs/2407.07972 (2024)
- 2023
- [j28] Kaiqing Zhang, Sham M. Kakade, Tamer Basar, Lin F. Yang: Model-Based Multi-Agent RL in Zero-Sum Markov Games with Near-Optimal Sample Complexity. J. Mach. Learn. Res. 24: 175:1-175:53 (2023)
- [c152] Gaurav Mahajan, Sham M. Kakade, Akshay Krishnamurthy, Cyril Zhang: Learning Hidden Markov Models Using Conditional Samples. COLT 2023: 2014-2066
- [c151] Tengyang Xie, Dylan J. Foster, Yu Bai, Nan Jiang, Sham M. Kakade: The Role of Coverage in Online Reinforcement Learning. ICLR 2023
- [c150] Dylan J. Foster, Noah Golowich, Sham M. Kakade: Hardness of Independent Learning and Sparse Equilibrium Computation in Markov Games. ICML 2023: 10188-10221
- [c149] Nikhil Vyas, Sham M. Kakade, Boaz Barak: On Provable Copyright Protection for Generative Models. ICML 2023: 35277-35299
- [c148] Jingfeng Wu, Difan Zou, Zixiang Chen, Vladimir Braverman, Quanquan Gu, Sham M. Kakade: Finite-Sample Analysis of Learning High-Dimensional Single ReLU Neuron. ICML 2023: 37919-37951
- [c147] Benjamin L. Edelman, Surbhi Goel, Sham M. Kakade, Eran Malach, Cyril Zhang: Pareto Frontiers in Deep Feature Learning: Data, Compute, Width, and Luck. NeurIPS 2023
- [c146] Aniket Rege, Aditya Kusupati, Sharan Ranjit S, Alan Fan, Qingqing Cao, Sham M. Kakade, Prateek Jain, Ali Farhadi: AdANNS: A Framework for Adaptive Semantic Search. NeurIPS 2023
- [c145] Krishna Pillutla, Vincent Roulet, Sham M. Kakade, Zaïd Harchaoui: Modified Gauss-Newton Algorithms under Noise. SSP 2023: 51-55
- [i133] Nikhil Vyas, Sham M. Kakade, Boaz Barak: Provable Copyright Protection for Generative Models. CoRR abs/2302.10870 (2023)
- [i132] Sham M. Kakade, Akshay Krishnamurthy, Gaurav Mahajan, Cyril Zhang: Learning Hidden Markov Models Using Conditional Samples. CoRR abs/2302.14753 (2023)
- [i131] Jingfeng Wu, Difan Zou, Zixiang Chen, Vladimir Braverman, Quanquan Gu, Sham M. Kakade: Learning High-Dimensional Single-Neuron ReLU Networks with Finite Samples. CoRR abs/2303.02255 (2023)
- [i130] Dylan J. Foster, Noah Golowich, Sham M. Kakade: Hardness of Independent Learning and Sparse Equilibrium Computation in Markov Games. CoRR abs/2303.12287 (2023)
- [i129] Krishna Pillutla, Vincent Roulet, Sham M. Kakade, Zaïd Harchaoui: Modified Gauss-Newton Algorithms under Noise. CoRR abs/2305.10634 (2023)
- [i128] Aniket Rege, Aditya Kusupati, Sharan Ranjit S, Alan Fan, Qingqing Cao, Sham M. Kakade, Prateek Jain, Ali Farhadi: AdANNS: A Framework for Adaptive Semantic Search. CoRR abs/2305.19435 (2023)
- [i127] Nikhil Vyas, Depen Morwani, Rosie Zhao, Gal Kaplun, Sham M. Kakade, Boaz Barak: Beyond Implicit Bias: The Insignificance of SGD Noise in Online Learning. CoRR abs/2306.08590 (2023)
- [i126] Jens Tuyls, Dhruv Madeka, Kari Torkkola, Dean P. Foster, Karthik Narasimhan, Sham M. Kakade: Scaling Laws for Imitation Learning in NetHack. CoRR abs/2307.09423 (2023)
- [i125] Benjamin L. Edelman, Surbhi Goel, Sham M. Kakade, Eran Malach, Cyril Zhang: Pareto Frontiers in Neural Feature Learning: Data, Compute, Width, and Luck. CoRR abs/2309.03800 (2023)
- [i124] Devvrit, Sneha Kudugunta, Aditya Kusupati, Tim Dettmers, Kaifeng Chen, Inderjit S. Dhillon, Yulia Tsvetkov, Hannaneh Hajishirzi, Sham M. Kakade, Ali Farhadi, Prateek Jain: MatFormer: Nested Transformer for Elastic Inference. CoRR abs/2310.07707 (2023)
- [i123] Sohrab Andaz, Carson Eisenach, Dhruv Madeka, Kari Torkkola, Randy Jia, Dean P. Foster, Sham M. Kakade: Learning an Inventory Control Policy with General Inventory Arrival Dynamics. CoRR abs/2310.17168 (2023)
- [i122] Depen Morwani, Benjamin L. Edelman, Costin-Andrei Oncescu, Rosie Zhao, Sham M. Kakade: Feature emergence via margin maximization: case studies in algebraic tasks. CoRR abs/2311.07568 (2023)
- [i121] Hanlin Zhang, Yi-Fan Zhang, Yaodong Yu, Dhruv Madeka, Dean Foster, Eric P. Xing, Himabindu Lakkaraju, Sham M. Kakade: A Study on the Calibration of In-context Learning. CoRR abs/2312.04021 (2023)
- 2022
- [j27] Krishna Pillutla, Sham M. Kakade, Zaïd Harchaoui: Robust Aggregation for Federated Learning. IEEE Trans. Signal Process. 70: 1142-1154 (2022)
- [c144] Jordan T. Ash, Cyril Zhang, Surbhi Goel, Akshay Krishnamurthy, Sham M. Kakade: Anti-Concentrated Confidence Bonuses For Scalable Exploration. ICLR 2022
- [c143] Jens Tuyls, Shunyu Yao, Sham M. Kakade, Karthik Narasimhan: Multi-Stage Episodic Control for Strategic Exploration in Text Games. ICLR 2022
- [c142] Benjamin L. Edelman, Surbhi Goel, Sham M. Kakade, Cyril Zhang: Inductive Biases and Variable Creation in Self-Attention Mechanisms. ICML 2022: 5793-5831
- [c141] Yonathan Efroni, Sham M. Kakade, Akshay Krishnamurthy, Cyril Zhang: Sparsity in Partially Controllable Linear Systems. ICML 2022: 5851-5860
- [c140] Nikunj Saunshi, Jordan T. Ash, Surbhi Goel, Dipendra Misra, Cyril Zhang, Sanjeev Arora, Sham M. Kakade, Akshay Krishnamurthy: Understanding Contrastive Learning Requires Incorporating Inductive Biases. ICML 2022: 19250-19286
- [c139] Jingfeng Wu, Difan Zou, Vladimir Braverman, Quanquan Gu, Sham M. Kakade: Last Iterate Risk Bounds of SGD with Decaying Stepsize for Overparameterized Linear Regression. ICML 2022: 24280-24314
- [c138] Abhishek Gupta, Aldo Pacchiano, Yuexiang Zhai, Sham M. Kakade, Sergey Levine: Unpacking Reward Shaping: Understanding the Benefits of Reward Engineering on Sample Complexity. NeurIPS 2022
- [c137] Boaz Barak, Benjamin L. Edelman, Surbhi Goel, Sham M. Kakade, Eran Malach, Cyril Zhang: Hidden Progress in Deep Learning: SGD Learns Parities Near the Computational Limit. NeurIPS 2022
- [c136] Surbhi Goel, Sham M. Kakade, Adam Kalai, Cyril Zhang: Recurrent Convolutional Neural Networks Learn Succinct Learning Algorithms. NeurIPS 2022
- [c135] Aditya Kusupati, Gantavya Bhatt, Aniket Rege, Matthew Wallingford, Aditya Sinha, Vivek Ramanujan, William Howard-Snyder, Kaifeng Chen, Sham M. Kakade, Prateek Jain, Ali Farhadi: Matryoshka Representation Learning. NeurIPS 2022
- [c134] Jingfeng Wu, Difan Zou, Vladimir Braverman, Quanquan Gu, Sham M. Kakade: The Power and Limitation of Pretraining-Finetuning for Linear Regression under Covariate Shift. NeurIPS 2022
- [c133] Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Sham M. Kakade: Risk Bounds of Multi-Pass SGD for Least Squares in the Interpolation Regime. NeurIPS 2022
- [i120] Jens Tuyls, Shunyu Yao, Sham M. Kakade, Karthik Narasimhan: Multi-Stage Episodic Control for Strategic Exploration in Text Games. CoRR abs/2201.01251 (2022)
- [i119] Nikunj Saunshi, Jordan T. Ash, Surbhi Goel, Dipendra Misra, Cyril Zhang, Sanjeev Arora, Sham M. Kakade, Akshay Krishnamurthy: Understanding Contrastive Learning Requires Incorporating Inductive Biases. CoRR abs/2202.14037 (2022)
- [i118] Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Sham M. Kakade: Risk Bounds of Multi-Pass SGD for Least Squares in the Interpolation Regime. CoRR abs/2203.03159 (2022)
- [i117] Juan C. Perdomo, Akshay Krishnamurthy, Peter L. Bartlett, Sham M. Kakade: A Sharp Characterization of Linear Estimators for Offline Policy Evaluation. CoRR abs/2203.04236 (2022)
- [i116] Aditya Kusupati, Gantavya Bhatt, Aniket Rege, Matthew Wallingford, Aditya Sinha, Vivek Ramanujan, William Howard-Snyder, Kaifeng Chen, Sham M. Kakade, Prateek Jain, Ali Farhadi: Matryoshka Representations for Adaptive Deployment. CoRR abs/2205.13147 (2022)
- [i115] Boaz Barak, Benjamin L. Edelman, Surbhi Goel, Sham M. Kakade, Eran Malach, Cyril Zhang: Hidden Progress in Deep Learning: SGD Learns Parities Near the Computational Limit. CoRR abs/2207.08799 (2022)
- [i114] Jingfeng Wu, Difan Zou, Vladimir Braverman, Quanquan Gu, Sham M. Kakade: The Power and Limitation of Pretraining-Finetuning for Linear Regression under Covariate Shift. CoRR abs/2208.01857 (2022)
- [i113] Surbhi Goel, Sham M. Kakade, Adam Tauman Kalai, Cyril Zhang: Recurrent Convolutional Neural Networks Learn Succinct Learning Algorithms. CoRR abs/2209.00735 (2022)
- [i112] Tengyang Xie, Dylan J. Foster, Yu Bai, Nan Jiang, Sham M. Kakade: The Role of Coverage in Online Reinforcement Learning. CoRR abs/2210.04157 (2022)
- [i111] Abhishek Gupta, Aldo Pacchiano, Yuexiang Zhai, Sham M. Kakade, Sergey Levine: Unpacking Reward Shaping: Understanding the Benefits of Reward Engineering on Sample Complexity. CoRR abs/2210.09579 (2022)
- 2021
- [j26] Chi Jin, Praneeth Netrapalli, Rong Ge, Sham M. Kakade, Michael I. Jordan: On Nonconvex Optimization for Machine Learning: Gradients, Stochasticity, and Saddle Points. J. ACM 68(2): 11:1-11:29 (2021)
- [j25] Alekh Agarwal, Sham M. Kakade, Jason D. Lee, Gaurav Mahajan: On the Theory of Policy Gradient Methods: Optimality, Approximation, and Distribution Shift. J. Mach. Learn. Res. 22: 98:1-98:76 (2021)
- [c132] Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Sham M. Kakade: Benign Overfitting of Constant-Stepsize SGD for Linear Regression. COLT 2021: 4633-4635
- [c131] Simon Shaolei Du, Wei Hu, Sham M. Kakade, Jason D. Lee, Qi Lei: Few-Shot Learning via Learning the Representation, Provably. ICLR 2021
- [c130] Preetum Nakkiran, Prayaag Venkat, Sham M. Kakade, Tengyu Ma: Optimal Regularization can Mitigate Double Descent. ICLR 2021
- [c129] Ruosong Wang, Dean P. Foster, Sham M. Kakade: What are the Statistical Limits of Offline RL with Linear Function Approximation? ICLR 2021
- [c128] Yu Bai, Minshuo Chen, Pan Zhou, Tuo Zhao, Jason D. Lee, Sham M. Kakade, Huan Wang, Caiming Xiong: How Important is the Train-Validation Split in Meta-Learning? ICML 2021: 543-553
- [c127] Simon S. Du, Sham M. Kakade, Jason D. Lee, Shachar Lovett, Gaurav Mahajan, Wen Sun, Ruosong Wang: Bilinear Classes: A Structural Framework for Provable Generalization in RL. ICML 2021: 2826-2836
- [c126] Ruosong Wang, Yifan Wu, Ruslan Salakhutdinov, Sham M. Kakade: Instabilities of Offline RL with Pre-Trained Neural Representation. ICML 2021: 10948-10960
- [c125] Xiyang Liu, Weihao Kong, Sham M. Kakade, Sewoong Oh: Robust and differentially private mean estimation. NeurIPS 2021: 3887-3901
- [c124] Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Dean P. Foster, Sham M. Kakade: The Benefits of Implicit Regularization from SGD in Least Squares Problems. NeurIPS 2021: 5456-5468
- [c123] Jordan T. Ash, Surbhi Goel, Akshay Krishnamurthy, Sham M. Kakade: Gone Fishing: Neural Active Learning with Fisher Embeddings. NeurIPS 2021: 8927-8939
- [c122] Baihe Huang, Kaixuan Huang, Sham M. Kakade, Jason D. Lee, Qi Lei, Runzhe Wang, Jiaqi Yang: Going Beyond Linear RL: Sample Efficient Neural Function Approximation. NeurIPS 2021: 8968-8983
- [c121] Yuanhao Wang, Ruosong Wang, Sham M. Kakade: An Exponential Lower Bound for Linearly Realizable MDP with Constant Suboptimality Gap. NeurIPS 2021: 9521-9533
- [c120] Aditya Kusupati, Matthew Wallingford, Vivek Ramanujan, Raghav Somani, Jae Sung Park, Krishna Pillutla, Prateek Jain, Sham M. Kakade, Ali Farhadi: LLC: Accurate, Multi-purpose Learnt Low-dimensional Binary Codes. NeurIPS 2021: 23900-23913
- [c119] Baihe Huang, Kaixuan Huang, Sham M. Kakade, Jason D. Lee, Qi Lei, Runzhe Wang, Jiaqi Yang: Optimal Gradient-based Algorithms for Non-concave Bandit Optimization. NeurIPS 2021: 29101-29115
- [i110] Xiyang Liu, Weihao Kong, Sham M. Kakade, Sewoong Oh: Robust and Differentially Private Mean Estimation. CoRR abs/2102.09159 (2021)
- [i109] Ruosong Wang, Yifan Wu, Ruslan Salakhutdinov, Sham M. Kakade: Instabilities of Offline RL with Pre-Trained Neural Representation. CoRR abs/2103.04947 (2021)
- [i108] Simon S. Du, Sham M. Kakade, Jason D. Lee, Shachar Lovett, Gaurav Mahajan, Wen Sun, Ruosong Wang: Bilinear Classes: A Structural Framework for Provable Generalization in RL. CoRR abs/2103.10897 (2021)
- [i107] Yuanhao Wang, Ruosong Wang, Sham M. Kakade: An Exponential Lower Bound for Linearly-Realizable MDPs with Constant Suboptimality Gap. CoRR abs/2103.12690 (2021)
- [i106] Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Sham M. Kakade: Benign Overfitting of Constant-Stepsize SGD for Linear Regression. CoRR abs/2103.12692 (2021)
- [i105] Aditya Kusupati, Matthew Wallingford, Vivek Ramanujan, Raghav Somani, Jae Sung Park, Krishna Pillutla, Prateek Jain, Sham M. Kakade, Ali Farhadi: LLC: Accurate, Multi-purpose Learnt Low-dimensional Binary Codes. CoRR abs/2106.01487 (2021)
- [i104] Jordan T. Ash, Surbhi Goel, Akshay Krishnamurthy, Sham M. Kakade: Gone Fishing: Neural Active Learning with Fisher Embeddings. CoRR abs/2106.09675 (2021)
- [i103] Motoya Ohnishi, Isao Ishikawa, Kendall Lowrey, Masahiro Ikeda, Sham M. Kakade, Yoshinobu Kawahara: Koopman Spectrum Nonlinear Regulator and Provably Efficient Online Learning. CoRR abs/2106.15775 (2021)
- [i102] Kaixuan Huang, Sham M. Kakade, Jason D. Lee, Qi Lei: A Short Note on the Relationship of Information Gain and Eluder Dimension. CoRR abs/2107.02377 (2021)
- [i101] Baihe Huang, Kaixuan Huang, Sham M. Kakade, Jason D. Lee, Qi Lei, Runzhe Wang, Jiaqi Yang: Optimal Gradient-based Algorithms for Non-concave Bandit Optimization. CoRR abs/2107.04518 (2021)
- [i100] Baihe Huang, Kaixuan Huang, Sham M. Kakade, Jason D. Lee, Qi Lei, Runzhe Wang, Jiaqi Yang: Going Beyond Linear RL: Sample Efficient Neural Function Approximation. CoRR abs/2107.06466 (2021)
- [i99] Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Dean P. Foster, Sham M. Kakade: The Benefits of Implicit Regularization from SGD in Least Squares Problems. CoRR abs/2108.04552 (2021)
- [i98] Yonathan Efroni, Sham M. Kakade, Akshay Krishnamurthy, Cyril Zhang: Sparsity in Partially Controllable Linear Systems. CoRR abs/2110.06150 (2021)
- [i97] Jingfeng Wu, Difan Zou, Vladimir Braverman, Quanquan Gu, Sham M. Kakade: Last Iterate Risk Bounds of SGD with Decaying Stepsize for Overparameterized Linear Regression. CoRR abs/2110.06198 (2021)
- [i96] Benjamin L. Edelman, Surbhi Goel, Sham M. Kakade, Cyril Zhang: Inductive Biases and Variable Creation in Self-Attention Mechanisms. CoRR abs/2110.10090 (2021)
- [i95] Jordan T. Ash, Cyril Zhang, Surbhi Goel, Akshay Krishnamurthy, Sham M. Kakade: Anti-Concentrated Confidence Bonuses for Scalable Exploration. CoRR abs/2110.11202 (2021)
- [i94] Dylan J. Foster, Sham M. Kakade, Jian Qian, Alexander Rakhlin: The Statistical Complexity of Interactive Decision Making. CoRR abs/2112.13487 (2021)
- 2020
- [j24] Justin Chan, Landon P. Cox, Dean P. Foster, Shyam Gollakota, Eric Horvitz, Joseph Jaeger, Sham M. Kakade, Tadayoshi Kohno, John Langford, Jonathan Larson, Puneet Sharma, Sudheesh Singanamalla, Jacob E. Sunshine, Stefano Tessaro: PACT: Privacy-Sensitive Protocols And Mechanisms for Mobile Contact Tracing. IEEE Data Eng. Bull. 43(2): 15-35 (2020)
- [j23] Damek Davis, Dmitriy Drusvyatskiy, Sham M. Kakade, Jason D. Lee: Stochastic Subgradient Method Converges on Tame Functions. Found. Comput. Math. 20(1): 119-154 (2020)
- [c118] Naman Agarwal, Sham M. Kakade, Rahul Kidambi, Yin Tat Lee, Praneeth Netrapalli, Aaron Sidford: Leverage Score Sampling for Faster Accelerated Regression and ERM. ALT 2020: 22-47
- [c117] Elad Hazan, Sham M. Kakade, Karan Singh: The Nonstochastic Control Problem. ALT 2020: 408-421
- [c116] Alekh Agarwal, Sham M. Kakade, Jason D. Lee, Gaurav Mahajan: Optimality and Approximation with Policy Gradient Methods in Markov Decision Processes. COLT 2020: 64-66
- [c115] Alekh Agarwal, Sham M. Kakade, Lin F. Yang: Model-Based Reinforcement Learning with a Generative Model is Minimax Optimal. COLT 2020: 67-83
- [c114] Simon S. Du, Sham M. Kakade, Ruosong Wang, Lin F. Yang: Is a Good Representation Sufficient for Sample Efficient Reinforcement Learning? ICLR 2020
- [c113] Sanjeev Arora, Simon S. Du, Sham M. Kakade, Yuping Luo, Nikunj Saunshi: Provable Representation Learning for Imitation Learning via Bi-level Optimization. ICML 2020: 367-376
- [c112] Mark Braverman, Xinyi Chen, Sham M. Kakade, Karthik Narasimhan, Cyril Zhang, Yi Zhang: Calibration, Entropy Rates, and Memory in Language Models. ICML 2020: 1089-1099
- [c111] Weihao Kong, Raghav Somani, Zhao Song, Sham M. Kakade, Sewoong Oh: Meta-learning for Mixed Linear Regression. ICML 2020: 5394-5404
- [c110] Aditya Kusupati, Vivek Ramanujan, Raghav Somani, Mitchell Wortsman, Prateek Jain, Sham M. Kakade, Ali Farhadi: Soft Threshold Weight Reparameterization for Learnable Sparsity. ICML 2020: 5544-5555
- [c109] Colin Wei, Sham M. Kakade, Tengyu Ma: The Implicit and Explicit Regularization Effects of Dropout. ICML 2020: 10181-10192
- [c108] Alekh Agarwal, Mikael Henaff, Sham M. Kakade, Wen Sun: PC-PG: Policy Cover Directed Exploration for Provable Policy Gradient Learning. NeurIPS 2020
- [c107] Alekh Agarwal, Sham M. Kakade, Akshay Krishnamurthy, Wen Sun: FLAMBE: Structural Complexity and Representation Learning of Low Rank MDPs. NeurIPS 2020
- [c106] Chi Jin, Sham M. Kakade, Akshay Krishnamurthy, Qinghua Liu: Sample-Efficient Reinforcement Learning of Undercomplete POMDPs. NeurIPS 2020
- [c105] Sham M. Kakade, Akshay Krishnamurthy, Kendall Lowrey, Motoya Ohnishi, Wen Sun: Information Theoretic Regret Bounds for Online Nonlinear Control. NeurIPS 2020
- [c104] Weihao Kong, Raghav Somani, Sham M. Kakade, Sewoong Oh: Robust Meta-learning for Mixed Linear Regression with Small Batches. NeurIPS 2020
- [c103] Ruosong Wang, Simon S. Du, Lin F. Yang, Sham M. Kakade: Is Long Horizon RL More Difficult Than Short Horizon RL? NeurIPS 2020
- [c102] Kaiqing Zhang, Sham M. Kakade, Tamer Basar, Lin F. Yang: Model-Based Multi-Agent RL in Zero-Sum Markov Games with Near-Optimal Sample Complexity. NeurIPS 2020
- [i93] Aditya Kusupati, Vivek Ramanujan, Raghav Somani, Mitchell Wortsman, Prateek Jain, Sham M. Kakade, Ali Farhadi: Soft Threshold Weight Reparameterization for Learnable Sparsity. CoRR abs/2002.03231 (2020)
- [i92] Weihao Kong, Raghav Somani, Zhao Song, Sham M. Kakade, Sewoong Oh: Meta-learning for mixed linear regression. CoRR abs/2002.08936 (2020)
- [i91] Simon S. Du, Wei Hu, Sham M. Kakade, Jason D. Lee, Qi Lei: Few-Shot Learning via Learning the Representation, Provably. CoRR abs/2002.09434 (2020)
- [i90] Sanjeev Arora, Simon S. Du, Sham M. Kakade, Yuping Luo, Nikunj Saunshi: Provable Representation Learning for Imitation Learning via Bi-level Optimization. CoRR abs/2002.10544 (2020)
- [i89]