Lin Yang 0011 (Lin F. Yang)

Person information

- affiliation: University of California, Los Angeles, CA, USA
Other persons with the same name
- Lin Yang — disambiguation page
- Lin Yang 0001 — University of Manchester, UK
- Lin Yang 0002 — University of Florida, Gainesville, FL, USA
- Lin Yang 0003 — University of Notre Dame, IN, USA
- Lin Yang 0004 — University of Electronic Science and Technology of China, National Key Laboratory of Science and Technology on Communications, Chengdu, China
- Lin Yang 0005 — Wuhan University, School of Electrical Engineering and Automation, China
- Lin Yang 0006 — Xidian University, State Key Discipline Laboratory of Wide Bandgap Semiconductor Technology, Xi'an, China
- Lin Yang 0007 — China University of Geosciences, Faculty of Information Engineering, Wuhan, China
- Lin Yang 0008 — Wuhan University of Technology, School of Logistics Engineering, China
- Lin Yang 0009 — Huawei Noah's Ark Lab, China (and 1 more)
- Lin Yang 0010 — University of Minnesota, Department of Electrical and Computer Engineering, MN, USA
2020 – today
- 2023
- [c60] Masoud Monajatipoor, Liunian Harold Li, Mozhdeh Rouhsedaghat, Lin Yang, Kai-Wei Chang: MetaVL: Transferring In-Context Learning Ability From Language Models to Vision-Language Models. ACL (2) 2023: 495-508
- [c59] Osama A. Hanna, Lin Yang, Christina Fragouli: Contexts can be Cheap: Solving Stochastic Contextual Bandits with Linear Bandit Algorithms. COLT 2023: 1791-1821
- [c58] Sanae Amani, Lin Yang, Ching-An Cheng: Provably Efficient Lifelong Reinforcement Learning with Linear Representation. ICLR 2023
- [c57] Sanae Amani, Tor Lattimore, András György, Lin Yang: Distributed Contextual Linear Bandits with Minimax Optimal Communication Cost. ICML 2023: 691-717
- [c56] Jialin Dong, Lin Yang: Does Sparsity Help in Learning Misspecified Linear Bandits? ICML 2023: 8317-8333
- [c55] Osama A. Hanna, Merve Karakas, Lin F. Yang, Christina Fragouli: Multi-Arm Bandits over Action Erasure Channels. ISIT 2023: 1312-1317
- [i72] Jialin Dong, Lin F. Yang: Does Sparsity Help in Learning Misspecified Linear Bandits? CoRR abs/2303.16998 (2023)
- [i71] Dingwen Kong, Lin F. Yang: Provably Feedback-Efficient Reinforcement Learning via Active Reward Learning. CoRR abs/2304.08944 (2023)
- [i70] Amin Karbasi, Grigoris Velegkas, Lin F. Yang, Felix Zhou: Replicability in Reinforcement Learning. CoRR abs/2305.19562 (2023)
- [i69] Masoud Monajatipoor, Liunian Harold Li, Mozhdeh Rouhsedaghat, Lin F. Yang, Kai-Wei Chang: MetaVL: Transferring In-Context Learning Ability From Language Models to Vision-Language Models. CoRR abs/2306.01311 (2023)
- [i68] Jiayi Huang, Han Zhong, Liwei Wang, Lin F. Yang: Tackling Heavy-Tailed Rewards in Reinforcement Learning with Function Approximation: Minimax Optimal and Instance-Dependent Regret Bounds. CoRR abs/2306.06836 (2023)
- [i67] Sanae Amani, Khushbu Pahwa, Vladimir Braverman, Lin F. Yang: Scaling Distributed Multi-task Reinforcement Learning with Experience Sharing. CoRR abs/2307.05834 (2023)
- [i66] Haochen Zhang, Xi Chen, Lin F. Yang: Adaptive Liquidity Provision in Uniswap V3 with Deep Reinforcement Learning. CoRR abs/2309.10129 (2023)
- 2022
- [j6] Osama A. Hanna, Lin F. Yang, Christina Fragouli: Compression for Multi-Arm Bandits. IEEE J. Sel. Areas Inf. Theory 3(4): 773-788 (2022)
- [j5] Vladimir Braverman, Robert Krauthgamer, Lin F. Yang: Universal Streaming of Subset Norms. Theory Comput. 18: 1-32 (2022)
- [c54] Jingfeng Wu, Vladimir Braverman, Lin Yang: Gap-Dependent Unsupervised Exploration for Reinforcement Learning. AISTATS 2022: 4109-4131
- [c53] Osama A. Hanna, Lin Yang, Christina Fragouli: Solving Multi-Arm Bandit Using a Few Bits of Communication. AISTATS 2022: 11215-11236
- [c52] Sanae Amani, Lin F. Yang: Doubly Pessimistic Algorithms for Strictly Safe Off-Policy Optimization. CISS 2022: 113-118
- [c51] Xiaoyu Chen, Jiachen Hu, Lin Yang, Liwei Wang: Near-Optimal Reward-Free Exploration for Linear Mixture MDPs with Plug-in Solver. ICLR 2022
- [c50] Weichao Mao, Lin Yang, Kaiqing Zhang, Tamer Basar: On Improving Model-Free Algorithms for Decentralized Multi-Agent Reinforcement Learning. ICML 2022: 15007-15049
- [c49] Osama A. Hanna, Lin Yang, Christina Fragouli: Learning from Distributed Users in Contextual Linear Bandits Without Sharing the Context. NeurIPS 2022
- [c48] Dingwen Kong, Lin Yang: Provably Feedback-Efficient Reinforcement Learning via Active Reward Learning. NeurIPS 2022
- [c47] Sharan Vaswani, Lin Yang, Csaba Szepesvári: Near-Optimal Sample Complexity Bounds for Constrained MDPs. NeurIPS 2022
- [i65] Sanae Amani, Tor Lattimore, András György, Lin F. Yang: Distributed Contextual Linear Bandits with Minimax Optimal Communication Cost. CoRR abs/2205.13170 (2022)
- [i64] Sanae Amani, Lin F. Yang, Ching-An Cheng: Provably Efficient Lifelong Reinforcement Learning with Linear Function Approximation. CoRR abs/2206.00270 (2022)
- [i63] Osama A. Hanna, Lin F. Yang, Christina Fragouli: Learning in Distributed Contextual Linear Bandits Without Sharing the Context. CoRR abs/2206.04180 (2022)
- [i62] Sharan Vaswani, Lin F. Yang, Csaba Szepesvári: Near-Optimal Sample Complexity Bounds for Constrained MDPs. CoRR abs/2206.06270 (2022)
- [i61] Ningyuan Huang, Soledad Villar, Carey E. Priebe, Da Zheng, Chengyue Huang, Lin Yang, Vladimir Braverman: From Local to Global: Spectral-Inspired Graph Neural Networks. CoRR abs/2209.12054 (2022)
- [i60] Osama A. Hanna, Lin F. Yang, Christina Fragouli: Contexts can be Cheap: Solving Stochastic Contextual Bandits with Linear Bandit Algorithms. CoRR abs/2211.05632 (2022)
- [i59] Jinghan Wang, Mengdi Wang, Lin F. Yang: Near Sample-Optimal Reduction-based Policy Learning for Average Reward MDP. CoRR abs/2212.00603 (2022)
- 2021
- [c46] Junhong Shen, Lin F. Yang: Theoretically Principled Deep RL Acceleration via Nearest Neighbor Function Approximation. AAAI 2021: 9558-9566
- [c45] Kunhe Yang, Lin F. Yang, Simon S. Du: Q-learning with Logarithmic Regret. AISTATS 2021: 1576-1584
- [c44] Yuanzhi Li, Ruosong Wang, Lin F. Yang: Settling the Horizon-Dependence of Sample Complexity in Reinforcement Learning. FOCS 2021: 965-976
- [c43] Sanae Amani, Christos Thrampoulidis, Lin Yang: Safe Reinforcement Learning with Linear Function Approximation. ICML 2021: 243-253
- [c42] Fei Feng, Wotao Yin, Alekh Agarwal, Lin Yang: Provably Correct Optimization and Exploration with Non-linear Policies. ICML 2021: 3263-3273
- [c41] Jialin Dong, Da Zheng, Lin F. Yang, George Karypis: Global Neighbor Sampling for Mixed CPU-GPU Training on Giant Graphs. KDD 2021: 289-299
- [c40] Nived Rajaraman, Yanjun Han, Lin Yang, Jingbo Liu, Jiantao Jiao, Kannan Ramchandran: On the Value of Interaction and Function Approximation in Imitation Learning. NeurIPS 2021: 1325-1336
- [c39] Jingfeng Wu, Vladimir Braverman, Lin Yang: Accommodating Picky Customers: Regret Bound and Exploration Complexity for Multi-Objective Reinforcement Learning. NeurIPS 2021: 13112-13124
- [c38] Han Zhong, Jiayi Huang, Lin Yang, Liwei Wang: Breaking the Moments Condition Barrier: No-Regret Algorithm for Bandits with Super Heavy-Tailed Payoffs. NeurIPS 2021: 15710-15720
- [c37] Qiwen Cui, Lin F. Yang: Minimax sample complexity for turn-based stochastic game. UAI 2021: 1496-1504
- [i58] Minbo Gao, Tianle Xie, Simon S. Du, Lin F. Yang: A Provably Efficient Algorithm for Linear Markov Decision Process with Low Switching Cost. CoRR abs/2101.00494 (2021)
- [i57] Nived Rajaraman, Yanjun Han, Lin F. Yang, Kannan Ramchandran, Jiantao Jiao: Provably Breaking the Quadratic Error Compounding Barrier in Imitation Learning, Optimally. CoRR abs/2102.12948 (2021)
- [i56] Fei Feng, Wotao Yin, Alekh Agarwal, Lin F. Yang: Provably Correct Optimization and Exploration with Non-linear Policies. CoRR abs/2103.11559 (2021)
- [i55] Jialin Dong, Da Zheng, Lin F. Yang, George Karypis: Global Neighbor Sampling for Mixed CPU-GPU Training on Giant Graphs. CoRR abs/2106.06150 (2021)
- [i54] Sanae Amani, Christos Thrampoulidis, Lin F. Yang: Safe Reinforcement Learning with Linear Function Approximation. CoRR abs/2106.06239 (2021)
- [i53] Dingwen Kong, Ruslan Salakhutdinov, Ruosong Wang, Lin F. Yang: Online Sub-Sampling for Reinforcement Learning with General Function Approximation. CoRR abs/2106.07203 (2021)
- [i52] Haque Ishfaq, Qiwen Cui, Viet Nguyen, Alex Ayoub, Zhuoran Yang, Zhaoran Wang, Doina Precup, Lin F. Yang: Randomized Exploration for Reinforcement Learning with General Value Function Approximation. CoRR abs/2106.07841 (2021)
- [i51] Jingfeng Wu, Vladimir Braverman, Lin F. Yang: Gap-Dependent Unsupervised Exploration for Reinforcement Learning. CoRR abs/2108.05439 (2021)
- [i50] Xiaoyu Chen, Jiachen Hu, Lin F. Yang, Liwei Wang: Near-Optimal Reward-Free Exploration for Linear Mixture MDPs with Plug-in Solver. CoRR abs/2110.03244 (2021)
- [i49] Junhong Shen, Lin F. Yang: Theoretically Principled Deep RL Acceleration via Nearest Neighbor Function Approximation. CoRR abs/2110.04422 (2021)
- [i48] Weichao Mao, Tamer Basar, Lin F. Yang, Kaiqing Zhang: Decentralized Cooperative Multi-Agent Reinforcement Learning with Exploration. CoRR abs/2110.05707 (2021)
- [i47] Han Zhong, Jiayi Huang, Lin F. Yang, Liwei Wang: Breaking the Moments Condition Barrier: No-Regret Algorithm for Bandits with Super Heavy-Tailed Payoffs. CoRR abs/2110.13876 (2021)
- [i46] Yuanzhi Li, Ruosong Wang, Lin F. Yang: Settling the Horizon-Dependence of Sample Complexity in Reinforcement Learning. CoRR abs/2111.00633 (2021)
- [i45] Osama A. Hanna, Lin F. Yang, Christina Fragouli: Solving Multi-Arm Bandit Using a Few Bits of Communication. CoRR abs/2111.06067 (2021)
- 2020
- [j4] Vladimir Braverman, Moses Charikar, William Kuszmaul, Lin F. Yang: The one-way communication complexity of dynamic time warping distance. J. Comput. Geom. 11(2): 62-93 (2020)
- [c36] Yingyu Liang, Zhao Song, Mengdi Wang, Lin Yang, Xin Yang: Sketching Transformed Matrices with Applications to Natural Language Processing. AISTATS 2020: 467-481
- [c35] Aaron Sidford, Mengdi Wang, Lin Yang, Yinyu Ye: Solving Discounted Stochastic Two-Player Games with Near-Optimal Time and Sample Complexity. AISTATS 2020: 2992-3002
- [c34] Alekh Agarwal, Sham M. Kakade, Lin F. Yang: Model-Based Reinforcement Learning with a Generative Model is Minimax Optimal. COLT 2020: 67-83
- [c33] Simon S. Du, Sham M. Kakade, Ruosong Wang, Lin F. Yang: Is a Good Representation Sufficient for Sample Efficient Reinforcement Learning? ICLR 2020
- [c32] Alex Ayoub, Zeyu Jia, Csaba Szepesvári, Mengdi Wang, Lin Yang: Model-Based Reinforcement Learning with Value-Targeted Regression. ICML 2020: 463-474
- [c31] Jingfeng Wu, Vladimir Braverman, Lin Yang: Obtaining Adjustable Regularization for Free via Iterate Averaging. ICML 2020: 10344-10354
- [c30] Lin Yang, Mengdi Wang: Reinforcement Learning in Feature Space: Matrix Bandit, Kernels, and Regret Bound. ICML 2020: 10746-10756
- [c29] Zeyu Jia, Lin Yang, Csaba Szepesvári, Mengdi Wang: Model-Based Reinforcement Learning with Value-Targeted Regression. L4DC 2020: 666-686
- [c28] Qiwen Cui, Lin F. Yang: Is Plug-in Solver Sample-Efficient for Feature-based Reinforcement Learning? NeurIPS 2020
- [c27] Fei Feng, Ruosong Wang, Wotao Yin, Simon S. Du, Lin F. Yang: Provably Efficient Exploration for Reinforcement Learning Using Unsupervised Learning. NeurIPS 2020
- [c26] Nived Rajaraman, Lin F. Yang, Jiantao Jiao, Kannan Ramchandran: Toward the Fundamental Limits of Imitation Learning. NeurIPS 2020
- [c25] Ruosong Wang, Simon S. Du, Lin F. Yang, Sham M. Kakade: Is Long Horizon RL More Difficult Than Short Horizon RL? NeurIPS 2020
- [c24] Ruosong Wang, Simon S. Du, Lin F. Yang, Ruslan Salakhutdinov: On Reward-Free Reinforcement Learning with Linear Function Approximation. NeurIPS 2020
- [c23] Ruosong Wang, Ruslan Salakhutdinov, Lin F. Yang: Reinforcement Learning with General Value Function Approximation: Provably Efficient Approach via Bounded Eluder Dimension. NeurIPS 2020
- [c22] Ruosong Wang, Peilin Zhong, Simon S. Du, Ruslan Salakhutdinov, Lin F. Yang: Planning with General Objective Functions: Going Beyond Total Rewards. NeurIPS 2020
- [c21] Yichong Xu, Ruosong Wang, Lin F. Yang, Aarti Singh, Artur Dubrawski: Preference-based Reinforcement Learning with Finite-Time Guarantees. NeurIPS 2020
- [c20] Kaiqing Zhang, Sham M. Kakade, Tamer Basar, Lin F. Yang: Model-Based Multi-Agent RL in Zero-Sum Markov Games with Near-Optimal Sample Complexity. NeurIPS 2020
- [i44] Yingyu Liang, Zhao Song, Mengdi Wang, Lin F. Yang, Xin Yang: Sketching Transformed Matrices with Applications to Natural Language Processing. CoRR abs/2002.09812 (2020)
- [i43] Gabriel I. Fernandez, Colin Togashi, Dennis W. Hong, Lin F. Yang: Deep Reinforcement Learning with Linear Quadratic Regulator Regions. CoRR abs/2002.09820 (2020)
- [i42] Fei Feng, Ruosong Wang, Wotao Yin, Simon S. Du, Lin F. Yang: Provably Efficient Exploration for RL with Unsupervised Learning. CoRR abs/2003.06898 (2020)
- [i41] Ruosong Wang, Simon S. Du, Lin F. Yang, Sham M. Kakade: Is Long Horizon Reinforcement Learning More Difficult Than Short Horizon Reinforcement Learning? CoRR abs/2005.00527 (2020)
- [i40] Ruosong Wang, Ruslan Salakhutdinov, Lin F. Yang: Provably Efficient Reinforcement Learning with General Value Function Approximation. CoRR abs/2005.10804 (2020)
- [i39] Alex Ayoub, Zeyu Jia, Csaba Szepesvári, Mengdi Wang, Lin F. Yang: Model-Based Reinforcement Learning with Value-Targeted Regression. CoRR abs/2006.01107 (2020)
- [i38] Yichong Xu, Ruosong Wang, Lin F. Yang, Aarti Singh, Artur Dubrawski: Preference-based Reinforcement Learning with Finite-Time Guarantees. CoRR abs/2006.08910 (2020)
- [i37] Kunhe Yang, Lin F. Yang, Simon S. Du: Q-learning with Logarithmic Regret. CoRR abs/2006.09118 (2020)
- [i36] Ruosong Wang, Simon S. Du, Lin F. Yang, Ruslan Salakhutdinov: On Reward-Free Reinforcement Learning with Linear Function Approximation. CoRR abs/2006.11274 (2020)
- [i35] Kaiqing Zhang, Sham M. Kakade, Tamer Basar, Lin F. Yang: Model-Based Multi-Agent RL in Zero-Sum Markov Games with Near-Optimal Sample Complexity. CoRR abs/2007.07461 (2020)
- [i34] Jingfeng Wu, Vladimir Braverman, Lin F. Yang: Obtaining Adjustable Regularization for Free via Iterate Averaging. CoRR abs/2008.06736 (2020)
- [i33] Nived Rajaraman, Lin F. Yang, Jiantao Jiao, Kannan Ramchandran: Toward the Fundamental Limits of Imitation Learning. CoRR abs/2009.05990 (2020)
- [i32] Qiwen Cui, Lin F. Yang: Is Plug-in Solver Sample-Efficient for Feature-based Reinforcement Learning? CoRR abs/2010.05673 (2020)
- [i31] Tianyu Wang, Lin F. Yang, Zizhuo Wang: Random Walk Bandits. CoRR abs/2011.01445 (2020)
- [i30] Tianyu Wang, Lin F. Yang: Episodic Linear Quadratic Regulators with Low-rank Transitions. CoRR abs/2011.01568 (2020)
- [i29] Jingfeng Wu, Vladimir Braverman, Lin F. Yang: Accommodating Picky Customers: Regret Bound and Exploration Complexity for Multi-Objective Reinforcement Learning. CoRR abs/2011.13034 (2020)
- [i28] Qiwen Cui, Lin F. Yang: Minimax Sample Complexity for Turn-based Stochastic Game. CoRR abs/2011.14267 (2020)
2010 – 2019
- 2019
- [j3] Zhuoran Yang, Lin F. Yang, Ethan X. Fang, Tuo Zhao, Zhaoran Wang, Matey Neykov: Misspecified nonconvex statistical optimization for sparse phase retrieval. Math. Program. 176(1-2): 545-571 (2019)
- [c19] Yibo Lin, Zhao Song, Lin F. Yang: Towards a Theoretical Understanding of Hashing-Based Neural Nets. AISTATS 2019: 127-137
- [c18] Chengzhuo Ni, Lin F. Yang, Mengdi Wang: Learning to Control in Metric Space with Optimal Regret. Allerton 2019: 726-733
- [c17] Vladimir Braverman, Moses Charikar, William Kuszmaul, David P. Woodruff, Lin F. Yang: The One-Way Communication Complexity of Dynamic Time Warping Distance. SoCG 2019: 16:1-16:15
- [c16] Lin Yang, Mengdi Wang: Sample-Optimal Parametric Q-Learning Using Linearly Additive Features. ICML 2019: 6995-7004
- [c15] Zhao Song, Ruosong Wang, Lin F. Yang, Hongyang Zhang, Peilin Zhong: Efficient Symmetric Norm Regression via Linear Sketching. NeurIPS 2019: 828-838
- [c14] Lin F. Yang, Zheng Yu, Vladimir Braverman, Tuo Zhao, Mengdi Wang: Online Factorization and Partition of Complex Networks by Random Walk. UAI 2019: 820-830
- [i27] Lin F. Yang, Mengdi Wang: Sample-Optimal Parametric Q-Learning with Linear Transition Models. CoRR abs/1902.04779 (2019)
- [i26] Vladimir Braverman, Moses Charikar, William Kuszmaul, David P. Woodruff, Lin F. Yang: The One-Way Communication Complexity of Dynamic Time Warping Distance. CoRR abs/1903.03520 (2019)
- [i25] Lin F. Yang, Chengzhuo Ni, Mengdi Wang: Learning to Control in Metric Space with Optimal Regret. CoRR abs/1905.01576 (2019)
- [i24] Lin F. Yang, Mengdi Wang: Reinforcement Leaning in Feature Space: Matrix Bandit, Kernels, and Regret Bound. CoRR abs/1905.10389 (2019)
- [i23] Zeyu Jia, Lin F. Yang, Mengdi Wang: Feature-Based Q-Learning for Two-Player Stochastic Games. CoRR abs/1906.00423 (2019)
- [i22] Alekh Agarwal, Sham M. Kakade, Lin F. Yang: On the Optimality of Sparse Model-Based Planning for Markov Decision Processes. CoRR abs/1906.03804 (2019)
- [i21] Aaron Sidford, Mengdi Wang, Lin F. Yang, Yinyu Ye: Solving Discounted Stochastic Two-Player Games with Near-Optimal Time and Sample Complexity. CoRR abs/1908.11071 (2019)
- [i20] Zhao Song, Ruosong Wang, Lin F. Yang, Hongyang Zhang, Peilin Zhong: Efficient Symmetric Norm Regression via Linear Sketching. CoRR abs/1910.01788 (2019)
- [i19] Simon S. Du, Sham M. Kakade, Ruosong Wang, Lin F. Yang: Is a Good Representation Sufficient for Sample Efficient Reinforcement Learning? CoRR abs/1910.03016 (2019)
- [i18] Simon S. Du, Ruosong Wang, Mengdi Wang, Lin F. Yang: Continuous Control with Contexts, Provably. CoRR abs/1910.13614 (2019)
- [i17] Fei Feng, Wotao Yin, Lin F. Yang: Does Knowledge Transfer Always Help to Learn a Better Policy? CoRR abs/1912.02986 (2019)
- 2018
- [j2] Vladimir Braverman, Zaoxing Liu, Tejasvam Singh, N. V. Vinodchandran, Lin F. Yang: New Bounds for the CLIQUE-GAP Problem Using Graph Decomposition Theory. Algorithmica 80(2): 652-667 (2018)
- [j1] Nikita Ivkin, Zaoxing Liu, Lin F. Yang, S. S. Kumar, Gerard Lemson, Mark Neyrinck, Alexander S. Szalay, Vladimir Braverman, Tamas Budavari: Scalable streaming tools for analyzing N-body simulations: Finding halos and investigating excursion sets in one pass. Astron. Comput. 23: 166-179 (2018)
- [c13] Avrim Blum, Vladimir Braverman, Ananya Kumar, Harry Lang, Lin F. Yang: Approximate Convex Hull of Data Streams. ICALP 2018: 21:1-21:13
- [c12] Vladimir Braverman, Emanuele Viola, David P. Woodruff, Lin F. Yang: Revisiting Frequency Moment Estimation in Random Order Streams. ICALP 2018: 25:1-25:14
- [c11] Vladimir Braverman, Stephen R. Chestnut, Robert Krauthgamer, Yi Li, David P. Woodruff, Lin F. Yang: Matrix Norms in Data Streams: Faster, Multi-Pass and Row-Order. ICML 2018: 648-657
- [c10] Minshuo Chen, Lin Yang, Mengdi Wang, Tuo Zhao: Dimensionality Reduction for Stationary Time Series via Stochastic Nonconvex Optimization. NeurIPS 2018: 3500-3510
- [c9] Lin F. Yang, Raman Arora, Vladimir Braverman, Tuo Zhao: The Physical Systems Behind Optimization Algorithms. NeurIPS 2018: 4377-4386
- [c8] Aaron Sidford, Mengdi Wang, Xian Wu, Lin Yang, Yinyu Ye: Near-Optimal Time and Sample Complexities for Solving Markov Decision Processes with a Generative Model. NeurIPS 2018: 5192-5202
- [i16] Zhao Song, Lin F. Yang, Peilin Zhong: Sensitivity Sampling Over Dynamic Geometric Data Streams with Applications to k-Clustering. CoRR abs/1802.00459 (2018)
- [i15] Vladimir Braverman, Emanuele Viola, David P. Woodruff, Lin F. Yang: Revisiting Frequency Moment Estimation in Random Order Streams. CoRR abs/1803.02270 (2018)
- [i14] Minshuo Chen, Lin Yang, Mengdi Wang, Tuo Zhao: Dimensionality Reduction for Stationary Time Series via Stochastic Nonconvex Optimization. CoRR abs/1803.02312 (2018)
- [i13] Zhehui Chen, Xingguo Li, Lin F. Yang, Jarvis D. Haupt, Tuo Zhao: On Landscape of Lagrangian Functions and Stochastic Search for Constrained Nonconvex Optimization. CoRR abs/1806.05151 (2018)
- [i12] Vladimir Braverman, Robert Krauthgamer, Lin F. Yang: Universal Streaming of Subset Norms. CoRR abs/1812.00241 (2018)
- [i11] Yibo Lin, Zhao Song, Lin F. Yang: Towards a Theoretical Understanding of Hashing-Based Neural Nets. CoRR abs/1812.10244 (2018)
- 2017
- [c7] Vladimir Braverman, Gereon Frahling, Harry Lang, Christian Sohler, Lin F. Yang: Clustering High Dimensional Dynamic Data Streams. ICML 2017: 576-585
- [c6] Zhehui Chen, Lin F. Yang, Chris Junchi Li, Tuo Zhao: Online Partial Least Square Optimization: Dropping Convexity for Better Efficiency and Scalability. ICML 2017: 777-786
- [c5] Jaroslaw Blasiok, Vladimir Braverman, Stephen R. Chestnut, Robert Krauthgamer, Lin F. Yang: Streaming symmetric norms via measure concentration. STOC 2017: 716-729
- [i10] Lin F. Yang, Vladimir Braverman, Tuo Zhao, Mengdi Wang: Dynamic Factorization and Partition of Complex Networks. CoRR abs/1705.07881 (2017)
- [i9] Vladimir Braverman, Gereon Frahling, Harry Lang, Christian Sohler, Lin F. Yang: Clustering High Dimensional Dynamic Data Streams. CoRR abs/1706.03887 (2017)
- [i8] Xingguo Li, Lin F. Yang, Jason Ge, Jarvis D. Haupt, Tong Zhang, Tuo Zhao: On Quadratic Convergence of DC Proximal Newton Algorithm for Nonconvex Sparse Learning in High Dimensions. CoRR abs/1706.06066 (2017)
- [i7]