Lin Yang 0011
Lin F. Yang
Person information
- affiliation: University of California, Los Angeles, CA, USA
Other persons with the same name
- Lin Yang — disambiguation page
- Lin Yang 0001 — Alcatel-Lucent Bell Labs, Shanghai, China (and 1 more)
- Lin Yang 0002 — Westlake University, School of Engineering, Artificial Intelligence and Biomedical Image Analysis Lab, Hangzhou, China (and 5 more)
- Lin Yang 0003 — University of Notre Dame, IN, USA
- Lin Yang 0004 — University of Electronic Science and Technology of China, National Key Laboratory of Science and Technology on Communications, Chengdu, China
- Lin Yang 0005 — Wuhan University, School of Electrical Engineering and Automation, China
- Lin Yang 0006 — Xidian University, State Key Discipline Laboratory of Wide Bandgap Semiconductor Technology, Xi'an, China
- Lin Yang 0007 — China University of Geosciences, Faculty of Information Engineering, Wuhan, China
- Lin Yang 0008 — Wuhan University of Technology, School of Logistics Engineering, China
- Lin Yang 0009 — Huawei Noah's Ark Lab, China (and 1 more)
- Lin Yang 0010 — University of Minnesota, Department of Electrical and Computer Engineering, MN, USA
- Lin Yang 0013 — Nanjing University, School of Intelligence Science and Technology, Nanjing, China (and 3 more)
- Lin Yang 0014 — Lenovo Research, AI Lab, Beijing, China
- Lin Yang 0015 — Gyrfalcon Technology Inc., Milpitas, CA, USA
- Lin Yang 0016 — Chinese Academy of Sciences, Institute of Acoustics, Key Laboratory of Speech Acoustics and Content Understanding, Beijing, China
- Lin Yang 0017 — South China University of Technology, School of Electric Power, Guangzhou, China (and 1 more)
- Lin Yang 0018 — Nanjing University, School of Geography and Ocean Science, Nanjing, China (and 1 more)
- Lin Yang 0019 — University of Minnesota, Department of Biomedical Engineering, Minneapolis, MN, USA
2020 – today
- 2024
- [j10] Andrea Soltoggio, Eseoghene Ben-Iwhiwhu, Vladimir Braverman, Eric Eaton, Benjamin Epstein, Yunhao Ge, Lucy Halperin, Jonathan P. How, Laurent Itti, Michael A. Jacobs, Pavan Kantharaju, Long Le, Steven Lee, Xinran Liu, Sildomar T. Monteiro, David Musliner, Saptarshi Nath, Priyadarshini Panda, Christos Peridis, Hamed Pirsiavash, Vishwa S. Parekh, Kaushik Roy, Shahaf S. Shperberg, Hava T. Siegelmann, Peter Stone, Kyle Vedder, Jingfeng Wu, Lin Yang, Guangyao Zheng, Soheil Kolouri: A collective AI via lifelong learning and sharing at the edge. Nat. Mac. Intell. 6(3): 251-264 (2024)
- [j9] Outongyi Lv, Bingxin Zhou, Lin F. Yang: Modeling Bellman-error with logistic distribution with applications in reinforcement learning. Neural Networks 177: 106387 (2024)
- [j8] Shuhao Xia, Yuanming Shi, Yong Zhou, Youlong Wu, Lin F. Yang, Khaled B. Letaief: Federated Learning With Massive Random Access. IEEE Trans. Wirel. Commun. 23(10): 13856-13871 (2024)
- [c71] Jiayi Huang, Han Zhong, Liwei Wang, Lin Yang: Horizon-Free and Instance-Dependent Regret Bounds for Reinforcement Learning with General Function Approximation. AISTATS 2024: 3673-3681
- [c70] Osama A. Hanna, Merve Karakas, Lin Yang, Christina Fragouli: Multi-Agent Bandit Learning through Heterogeneous Action Erasure Channels. AISTATS 2024: 3898-3906
- [c69] Seungmin Jung, Mianzhi Zhou, Ji Ma, Ryan Yang, Steven C. Cramer, Bruce H. Dobkin, Lin F. Yang, Jacob Rosen: Wearable Body Sensors Integrated into a Virtual Reality Environment - A Modality for Automating the Rehabilitation of the Motor Control System. EMBC 2024: 1-4
- [c68] Jialin Dong, Jiayi Wang, Lin F. Yang: Delayed MDPs with Feature Mapping. IJCNN 2024: 1-8
- [c67] Jialin Dong, Jiayi Wang, Lin F. Yang: Provably Correct SGD-Based Exploration for Generalized Stochastic Bandit Problem. SmartNets 2024: 1-6
- [i79] Jialin Dong, Bahare Fatemi, Bryan Perozzi, Lin F. Yang, Anton Tsitsulin: Don't Forget to Connect! Improving RAG with Graph-based Reranking. CoRR abs/2405.18414 (2024)
- [i78] Osama A. Hanna, Merve Karakas, Lin F. Yang, Christina Fragouli: Learning for Bandits under Action Erasures. CoRR abs/2406.18072 (2024)
- [i77] Tian Tian, Lin F. Yang, Csaba Szepesvári: Confident Natural Policy Gradient for Local Planning in qπ-realizable Constrained MDPs. CoRR abs/2406.18529 (2024)
- [i76] Ally Yalei Du, Lin F. Yang, Ruosong Wang: Misspecified Q-Learning with Sparse Linear Function Approximation: Tight Bounds on Approximation Error. CoRR abs/2407.13622 (2024)
- [i75] Yiran Wang, Chenshu Liu, Yunfan Li, Sanae Amani, Bolei Zhou, Lin F. Yang: Hyper: Hyperparameter Robust Efficient Exploration in Reinforcement Learning. CoRR abs/2412.03767 (2024)
- 2023
- [j7] Kaiqing Zhang, Sham M. Kakade, Tamer Basar, Lin F. Yang: Model-Based Multi-Agent RL in Zero-Sum Markov Games with Near-Optimal Sample Complexity. J. Mach. Learn. Res. 24: 175:1-175:53 (2023)
- [c66] Masoud Monajatipoor, Liunian Harold Li, Mozhdeh Rouhsedaghat, Lin Yang, Kai-Wei Chang: MetaVL: Transferring In-Context Learning Ability From Language Models to Vision-Language Models. ACL (2) 2023: 495-508
- [c65] Osama A. Hanna, Lin Yang, Christina Fragouli: Contexts can be Cheap: Solving Stochastic Contextual Bandits with Linear Bandit Algorithms. COLT 2023: 1791-1821
- [c64] Sanae Amani, Lin Yang, Ching-An Cheng: Provably Efficient Lifelong Reinforcement Learning with Linear Representation. ICLR 2023
- [c63] Sanae Amani, Tor Lattimore, András György, Lin Yang: Distributed Contextual Linear Bandits with Minimax Optimal Communication Cost. ICML 2023: 691-717
- [c62] Jialin Dong, Lin Yang: Does Sparsity Help in Learning Misspecified Linear Bandits? ICML 2023: 8317-8333
- [c61] Osama A. Hanna, Merve Karakas, Lin F. Yang, Christina Fragouli: Multi-Arm Bandits over Action Erasure Channels. ISIT 2023: 1312-1317
- [c60] Osama A. Hanna, Lin Yang, Christina Fragouli: Efficient Batched Algorithm for Contextual Linear Bandits with Large Action Space via Soft Elimination. NeurIPS 2023
- [c59] Jiayi Huang, Han Zhong, Liwei Wang, Lin Yang: Tackling Heavy-Tailed Rewards in Reinforcement Learning with Function Approximation: Minimax Optimal and Instance-Dependent Regret Bounds. NeurIPS 2023
- [c58] Amin Karbasi, Grigoris Velegkas, Lin Yang, Felix Zhou: Replicability in Reinforcement Learning. NeurIPS 2023
- [i74] Jialin Dong, Lin F. Yang: Does Sparsity Help in Learning Misspecified Linear Bandits? CoRR abs/2303.16998 (2023)
- [i73] Dingwen Kong, Lin F. Yang: Provably Feedback-Efficient Reinforcement Learning via Active Reward Learning. CoRR abs/2304.08944 (2023)
- [i72] Amin Karbasi, Grigoris Velegkas, Lin F. Yang, Felix Zhou: Replicability in Reinforcement Learning. CoRR abs/2305.19562 (2023)
- [i71] Masoud Monajatipoor, Liunian Harold Li, Mozhdeh Rouhsedaghat, Lin F. Yang, Kai-Wei Chang: MetaVL: Transferring In-Context Learning Ability From Language Models to Vision-Language Models. CoRR abs/2306.01311 (2023)
- [i70] Jiayi Huang, Han Zhong, Liwei Wang, Lin F. Yang: Tackling Heavy-Tailed Rewards in Reinforcement Learning with Function Approximation: Minimax Optimal and Instance-Dependent Regret Bounds. CoRR abs/2306.06836 (2023)
- [i69] Sanae Amani, Khushbu Pahwa, Vladimir Braverman, Lin F. Yang: Scaling Distributed Multi-task Reinforcement Learning with Experience Sharing. CoRR abs/2307.05834 (2023)
- [i68] Haochen Zhang, Xi Chen, Lin F. Yang: Adaptive Liquidity Provision in Uniswap V3 with Deep Reinforcement Learning. CoRR abs/2309.10129 (2023)
- [i67] Jiayi Huang, Han Zhong, Liwei Wang, Lin F. Yang: Horizon-Free and Instance-Dependent Regret Bounds for Reinforcement Learning with General Function Approximation. CoRR abs/2312.04464 (2023)
- [i66] Osama A. Hanna, Merve Karakas, Lin F. Yang, Christina Fragouli: Multi-Agent Bandit Learning through Heterogeneous Action Erasure Channels. CoRR abs/2312.14259 (2023)
- 2022
- [j6] Osama A. Hanna, Lin F. Yang, Christina Fragouli: Compression for Multi-Arm Bandits. IEEE J. Sel. Areas Inf. Theory 3(4): 773-788 (2022)
- [j5] Vladimir Braverman, Robert Krauthgamer, Lin F. Yang: Universal Streaming of Subset Norms. Adv. Math. Commun. 18: 1-32 (2022)
- [c57] Jingfeng Wu, Vladimir Braverman, Lin Yang: Gap-Dependent Unsupervised Exploration for Reinforcement Learning. AISTATS 2022: 4109-4131
- [c56] Osama A. Hanna, Lin Yang, Christina Fragouli: Solving Multi-Arm Bandit Using a Few Bits of Communication. AISTATS 2022: 11215-11236
- [c55] Sanae Amani, Lin F. Yang: Doubly Pessimistic Algorithms for Strictly Safe Off-Policy Optimization. CISS 2022: 113-118
- [c54] Xiaoyu Chen, Jiachen Hu, Lin Yang, Liwei Wang: Near-Optimal Reward-Free Exploration for Linear Mixture MDPs with Plug-in Solver. ICLR 2022
- [c53] Weichao Mao, Lin Yang, Kaiqing Zhang, Tamer Basar: On Improving Model-Free Algorithms for Decentralized Multi-Agent Reinforcement Learning. ICML 2022: 15007-15049
- [c52] Osama A. Hanna, Lin Yang, Christina Fragouli: Learning from Distributed Users in Contextual Linear Bandits Without Sharing the Context. NeurIPS 2022
- [c51] Dingwen Kong, Lin Yang: Provably Feedback-Efficient Reinforcement Learning via Active Reward Learning. NeurIPS 2022
- [c50] Sharan Vaswani, Lin Yang, Csaba Szepesvári: Near-Optimal Sample Complexity Bounds for Constrained MDPs. NeurIPS 2022
- [i65] Sanae Amani, Tor Lattimore, András György, Lin F. Yang: Distributed Contextual Linear Bandits with Minimax Optimal Communication Cost. CoRR abs/2205.13170 (2022)
- [i64] Sanae Amani, Lin F. Yang, Ching-An Cheng: Provably Efficient Lifelong Reinforcement Learning with Linear Function Approximation. CoRR abs/2206.00270 (2022)
- [i63] Osama A. Hanna, Lin F. Yang, Christina Fragouli: Learning in Distributed Contextual Linear Bandits Without Sharing the Context. CoRR abs/2206.04180 (2022)
- [i62] Sharan Vaswani, Lin F. Yang, Csaba Szepesvári: Near-Optimal Sample Complexity Bounds for Constrained MDPs. CoRR abs/2206.06270 (2022)
- [i61] Ningyuan Huang, Soledad Villar, Carey E. Priebe, Da Zheng, Chengyue Huang, Lin Yang, Vladimir Braverman: From Local to Global: Spectral-Inspired Graph Neural Networks. CoRR abs/2209.12054 (2022)
- [i60] Osama A. Hanna, Lin F. Yang, Christina Fragouli: Contexts can be Cheap: Solving Stochastic Contextual Bandits with Linear Bandit Algorithms. CoRR abs/2211.05632 (2022)
- [i59] Jinghan Wang, Mengdi Wang, Lin F. Yang: Near Sample-Optimal Reduction-based Policy Learning for Average Reward MDP. CoRR abs/2212.00603 (2022)
- 2021
- [c49] Junhong Shen, Lin F. Yang: Theoretically Principled Deep RL Acceleration via Nearest Neighbor Function Approximation. AAAI 2021: 9558-9566
- [c48] Kunhe Yang, Lin F. Yang, Simon S. Du: Q-learning with Logarithmic Regret. AISTATS 2021: 1576-1584
- [c47] Yuanzhi Li, Ruosong Wang, Lin F. Yang: Settling the Horizon-Dependence of Sample Complexity in Reinforcement Learning. FOCS 2021: 965-976
- [c46] Sanae Amani, Christos Thrampoulidis, Lin Yang: Safe Reinforcement Learning with Linear Function Approximation. ICML 2021: 243-253
- [c45] Fei Feng, Wotao Yin, Alekh Agarwal, Lin Yang: Provably Correct Optimization and Exploration with Non-linear Policies. ICML 2021: 3263-3273
- [c44] Haque Ishfaq, Qiwen Cui, Viet Nguyen, Alex Ayoub, Zhuoran Yang, Zhaoran Wang, Doina Precup, Lin Yang: Randomized Exploration in Reinforcement Learning with General Value Function Approximation. ICML 2021: 4607-4616
- [c43] Jialin Dong, Da Zheng, Lin F. Yang, George Karypis: Global Neighbor Sampling for Mixed CPU-GPU Training on Giant Graphs. KDD 2021: 289-299
- [c42] Nived Rajaraman, Yanjun Han, Lin Yang, Jingbo Liu, Jiantao Jiao, Kannan Ramchandran: On the Value of Interaction and Function Approximation in Imitation Learning. NeurIPS 2021: 1325-1336
- [c41] Jingfeng Wu, Vladimir Braverman, Lin Yang: Accommodating Picky Customers: Regret Bound and Exploration Complexity for Multi-Objective Reinforcement Learning. NeurIPS 2021: 13112-13124
- [c40] Han Zhong, Jiayi Huang, Lin Yang, Liwei Wang: Breaking the Moments Condition Barrier: No-Regret Algorithm for Bandits with Super Heavy-Tailed Payoffs. NeurIPS 2021: 15710-15720
- [c39] Qiwen Cui, Lin F. Yang: Minimax sample complexity for turn-based stochastic game. UAI 2021: 1496-1504
- [i58] Minbo Gao, Tianle Xie, Simon S. Du, Lin F. Yang: A Provably Efficient Algorithm for Linear Markov Decision Process with Low Switching Cost. CoRR abs/2101.00494 (2021)
- [i57] Nived Rajaraman, Yanjun Han, Lin F. Yang, Kannan Ramchandran, Jiantao Jiao: Provably Breaking the Quadratic Error Compounding Barrier in Imitation Learning, Optimally. CoRR abs/2102.12948 (2021)
- [i56] Fei Feng, Wotao Yin, Alekh Agarwal, Lin F. Yang: Provably Correct Optimization and Exploration with Non-linear Policies. CoRR abs/2103.11559 (2021)
- [i55] Jialin Dong, Da Zheng, Lin F. Yang, George Karypis: Global Neighbor Sampling for Mixed CPU-GPU Training on Giant Graphs. CoRR abs/2106.06150 (2021)
- [i54] Sanae Amani, Christos Thrampoulidis, Lin F. Yang: Safe Reinforcement Learning with Linear Function Approximation. CoRR abs/2106.06239 (2021)
- [i53] Dingwen Kong, Ruslan Salakhutdinov, Ruosong Wang, Lin F. Yang: Online Sub-Sampling for Reinforcement Learning with General Function Approximation. CoRR abs/2106.07203 (2021)
- [i52] Haque Ishfaq, Qiwen Cui, Viet Nguyen, Alex Ayoub, Zhuoran Yang, Zhaoran Wang, Doina Precup, Lin F. Yang: Randomized Exploration for Reinforcement Learning with General Value Function Approximation. CoRR abs/2106.07841 (2021)
- [i51] Jingfeng Wu, Vladimir Braverman, Lin F. Yang: Gap-Dependent Unsupervised Exploration for Reinforcement Learning. CoRR abs/2108.05439 (2021)
- [i50] Xiaoyu Chen, Jiachen Hu, Lin F. Yang, Liwei Wang: Near-Optimal Reward-Free Exploration for Linear Mixture MDPs with Plug-in Solver. CoRR abs/2110.03244 (2021)
- [i49] Junhong Shen, Lin F. Yang: Theoretically Principled Deep RL Acceleration via Nearest Neighbor Function Approximation. CoRR abs/2110.04422 (2021)
- [i48] Weichao Mao, Tamer Basar, Lin F. Yang, Kaiqing Zhang: Decentralized Cooperative Multi-Agent Reinforcement Learning with Exploration. CoRR abs/2110.05707 (2021)
- [i47] Han Zhong, Jiayi Huang, Lin F. Yang, Liwei Wang: Breaking the Moments Condition Barrier: No-Regret Algorithm for Bandits with Super Heavy-Tailed Payoffs. CoRR abs/2110.13876 (2021)
- [i46] Yuanzhi Li, Ruosong Wang, Lin F. Yang: Settling the Horizon-Dependence of Sample Complexity in Reinforcement Learning. CoRR abs/2111.00633 (2021)
- [i45] Osama A. Hanna, Lin F. Yang, Christina Fragouli: Solving Multi-Arm Bandit Using a Few Bits of Communication. CoRR abs/2111.06067 (2021)
- 2020
- [j4] Vladimir Braverman, Moses Charikar, William Kuszmaul, Lin F. Yang: The one-way communication complexity of dynamic time warping distance. J. Comput. Geom. 11(2): 62-93 (2020)
- [c38] Yingyu Liang, Zhao Song, Mengdi Wang, Lin Yang, Xin Yang: Sketching Transformed Matrices with Applications to Natural Language Processing. AISTATS 2020: 467-481
- [c37] Aaron Sidford, Mengdi Wang, Lin Yang, Yinyu Ye: Solving Discounted Stochastic Two-Player Games with Near-Optimal Time and Sample Complexity. AISTATS 2020: 2992-3002
- [c36] Alekh Agarwal, Sham M. Kakade, Lin F. Yang: Model-Based Reinforcement Learning with a Generative Model is Minimax Optimal. COLT 2020: 67-83
- [c35] Simon S. Du, Sham M. Kakade, Ruosong Wang, Lin F. Yang: Is a Good Representation Sufficient for Sample Efficient Reinforcement Learning? ICLR 2020
- [c34] Alex Ayoub, Zeyu Jia, Csaba Szepesvári, Mengdi Wang, Lin Yang: Model-Based Reinforcement Learning with Value-Targeted Regression. ICML 2020: 463-474
- [c33] Jingfeng Wu, Vladimir Braverman, Lin Yang: Obtaining Adjustable Regularization for Free via Iterate Averaging. ICML 2020: 10344-10354
- [c32] Lin Yang, Mengdi Wang: Reinforcement Learning in Feature Space: Matrix Bandit, Kernels, and Regret Bound. ICML 2020: 10746-10756
- [c31] Zeyu Jia, Lin Yang, Csaba Szepesvári, Mengdi Wang: Model-Based Reinforcement Learning with Value-Targeted Regression. L4DC 2020: 666-686
- [c30] Qiwen Cui, Lin F. Yang: Is Plug-in Solver Sample-Efficient for Feature-based Reinforcement Learning? NeurIPS 2020
- [c29] Fei Feng, Ruosong Wang, Wotao Yin, Simon S. Du, Lin F. Yang: Provably Efficient Exploration for Reinforcement Learning Using Unsupervised Learning. NeurIPS 2020
- [c28] Nived Rajaraman, Lin F. Yang, Jiantao Jiao, Kannan Ramchandran: Toward the Fundamental Limits of Imitation Learning. NeurIPS 2020
- [c27] Ruosong Wang, Simon S. Du, Lin F. Yang, Sham M. Kakade: Is Long Horizon RL More Difficult Than Short Horizon RL? NeurIPS 2020
- [c26] Ruosong Wang, Simon S. Du, Lin F. Yang, Ruslan Salakhutdinov: On Reward-Free Reinforcement Learning with Linear Function Approximation. NeurIPS 2020
- [c25] Ruosong Wang, Ruslan Salakhutdinov, Lin F. Yang: Reinforcement Learning with General Value Function Approximation: Provably Efficient Approach via Bounded Eluder Dimension. NeurIPS 2020
- [c24] Ruosong Wang, Peilin Zhong, Simon S. Du, Ruslan Salakhutdinov, Lin F. Yang: Planning with General Objective Functions: Going Beyond Total Rewards. NeurIPS 2020
- [c23] Yichong Xu, Ruosong Wang, Lin F. Yang, Aarti Singh, Artur Dubrawski: Preference-based Reinforcement Learning with Finite-Time Guarantees. NeurIPS 2020
- [c22] Kaiqing Zhang, Sham M. Kakade, Tamer Basar, Lin F. Yang: Model-Based Multi-Agent RL in Zero-Sum Markov Games with Near-Optimal Sample Complexity. NeurIPS 2020
- [i44] Yingyu Liang, Zhao Song, Mengdi Wang, Lin F. Yang, Xin Yang: Sketching Transformed Matrices with Applications to Natural Language Processing. CoRR abs/2002.09812 (2020)
- [i43] Gabriel I. Fernandez, Colin Togashi, Dennis W. Hong, Lin F. Yang: Deep Reinforcement Learning with Linear Quadratic Regulator Regions. CoRR abs/2002.09820 (2020)
- [i42] Fei Feng, Ruosong Wang, Wotao Yin, Simon S. Du, Lin F. Yang: Provably Efficient Exploration for RL with Unsupervised Learning. CoRR abs/2003.06898 (2020)
- [i41] Ruosong Wang, Simon S. Du, Lin F. Yang, Sham M. Kakade: Is Long Horizon Reinforcement Learning More Difficult Than Short Horizon Reinforcement Learning? CoRR abs/2005.00527 (2020)
- [i40] Ruosong Wang, Ruslan Salakhutdinov, Lin F. Yang: Provably Efficient Reinforcement Learning with General Value Function Approximation. CoRR abs/2005.10804 (2020)
- [i39] Alex Ayoub, Zeyu Jia, Csaba Szepesvári, Mengdi Wang, Lin F. Yang: Model-Based Reinforcement Learning with Value-Targeted Regression. CoRR abs/2006.01107 (2020)
- [i38] Yichong Xu, Ruosong Wang, Lin F. Yang, Aarti Singh, Artur Dubrawski: Preference-based Reinforcement Learning with Finite-Time Guarantees. CoRR abs/2006.08910 (2020)
- [i37] Kunhe Yang, Lin F. Yang, Simon S. Du: Q-learning with Logarithmic Regret. CoRR abs/2006.09118 (2020)
- [i36] Ruosong Wang, Simon S. Du, Lin F. Yang, Ruslan Salakhutdinov: On Reward-Free Reinforcement Learning with Linear Function Approximation. CoRR abs/2006.11274 (2020)
- [i35] Kaiqing Zhang, Sham M. Kakade, Tamer Basar, Lin F. Yang: Model-Based Multi-Agent RL in Zero-Sum Markov Games with Near-Optimal Sample Complexity. CoRR abs/2007.07461 (2020)
- [i34] Jingfeng Wu, Vladimir Braverman, Lin F. Yang: Obtaining Adjustable Regularization for Free via Iterate Averaging. CoRR abs/2008.06736 (2020)
- [i33] Nived Rajaraman, Lin F. Yang, Jiantao Jiao, Kannan Ramchandran: Toward the Fundamental Limits of Imitation Learning. CoRR abs/2009.05990 (2020)
- [i32] Qiwen Cui, Lin F. Yang: Is Plug-in Solver Sample-Efficient for Feature-based Reinforcement Learning? CoRR abs/2010.05673 (2020)
- [i31] Tianyu Wang, Lin F. Yang, Zizhuo Wang: Random Walk Bandits. CoRR abs/2011.01445 (2020)
- [i30] Tianyu Wang, Lin F. Yang: Episodic Linear Quadratic Regulators with Low-rank Transitions. CoRR abs/2011.01568 (2020)
- [i29] Jingfeng Wu, Vladimir Braverman, Lin F. Yang: Accommodating Picky Customers: Regret Bound and Exploration Complexity for Multi-Objective Reinforcement Learning. CoRR abs/2011.13034 (2020)
- [i28] Qiwen Cui, Lin F. Yang: Minimax Sample Complexity for Turn-based Stochastic Game. CoRR abs/2011.14267 (2020)
2010 – 2019
- 2019
- [j3] Zhuoran Yang, Lin F. Yang, Ethan X. Fang, Tuo Zhao, Zhaoran Wang, Matey Neykov: Misspecified nonconvex statistical optimization for sparse phase retrieval. Math. Program. 176(1-2): 545-571 (2019)
- [c21] Yibo Lin, Zhao Song, Lin F. Yang: Towards a Theoretical Understanding of Hashing-Based Neural Nets. AISTATS 2019: 127-137
- [c20] Zhehui Chen, Xingguo Li, Lin Yang, Jarvis D. Haupt, Tuo Zhao: On Constrained Nonconvex Stochastic Optimization: A Case Study for Generalized Eigenvalue Decomposition. AISTATS 2019: 916-925
- [c19] Chengzhuo Ni, Lin F. Yang, Mengdi Wang: Learning to Control in Metric Space with Optimal Regret. Allerton 2019: 726-733
- [c18] Vladimir Braverman, Moses Charikar, William Kuszmaul, David P. Woodruff, Lin F. Yang: The One-Way Communication Complexity of Dynamic Time Warping Distance. SoCG 2019: 16:1-16:15
- [c17] Lin Yang, Mengdi Wang: Sample-Optimal Parametric Q-Learning Using Linearly Additive Features. ICML 2019: 6995-7004
- [c16] Zhao Song, Ruosong Wang, Lin F. Yang, Hongyang Zhang, Peilin Zhong: Efficient Symmetric Norm Regression via Linear Sketching. NeurIPS 2019: 828-838
- [c15] Lin F. Yang, Zheng Yu, Vladimir Braverman, Tuo Zhao, Mengdi Wang: Online Factorization and Partition of Complex Networks by Random Walk. UAI 2019: 820-830
- [i27] Lin F. Yang, Mengdi Wang: Sample-Optimal Parametric Q-Learning with Linear Transition Models. CoRR abs/1902.04779 (2019)
- [i26] Vladimir Braverman, Moses Charikar, William Kuszmaul, David P. Woodruff, Lin F. Yang: The One-Way Communication Complexity of Dynamic Time Warping Distance. CoRR abs/1903.03520 (2019)
- [i25] Lin F. Yang, Chengzhuo Ni, Mengdi Wang: Learning to Control in Metric Space with Optimal Regret. CoRR abs/1905.01576 (2019)
- [i24] Lin F. Yang, Mengdi Wang: Reinforcement Leaning in Feature Space: Matrix Bandit, Kernels, and Regret Bound. CoRR abs/1905.10389 (2019)
- [i23] Zeyu Jia, Lin F. Yang, Mengdi Wang: Feature-Based Q-Learning for Two-Player Stochastic Games. CoRR abs/1906.00423 (2019)
- [i22] Alekh Agarwal, Sham M. Kakade, Lin F. Yang: On the Optimality of Sparse Model-Based Planning for Markov Decision Processes. CoRR abs/1906.03804 (2019)
- [i21] Aaron Sidford,