Ching-An Cheng
2020 – today
- 2024
- [c44] Sinong Geng, Aldo Pacchiano, Andrey Kolobov, Ching-An Cheng: Improving Offline RL by Blending Heuristics. ICLR 2024
- [c43] Ruijie Zheng, Ching-An Cheng, Hal Daumé III, Furong Huang, Andrey Kolobov: PRISE: LLM-Style Sequence Compression for Learning Temporal Action Abstractions in Control. ICML 2024
- [i44] Ruijie Zheng, Ching-An Cheng, Hal Daumé III, Furong Huang, Andrey Kolobov: PRISE: Learning Temporal Action Abstractions as a Sequence Compression Problem. CoRR abs/2402.10450 (2024)
- [i43] Corby Rosset, Ching-An Cheng, Arindam Mitra, Michael Santacroce, Ahmed Awadallah, Tengyang Xie: Direct Nash Optimization: Teaching Language Models to Self-Improve with General Preferences. CoRR abs/2404.03715 (2024)
- [i42] Allen Nie, Ching-An Cheng, Andrey Kolobov, Adith Swaminathan: The Importance of Directional Feedback for LLM-based Optimizers. CoRR abs/2405.16434 (2024)
- [i41] Ching-An Cheng, Allen Nie, Adith Swaminathan: Trace is the New AutoDiff - Unlocking Efficient Optimization of Computational Workflows. CoRR abs/2406.16218 (2024)
- 2023
- [c42] Garrett Thomas, Ching-An Cheng, Ricky Loynd, Felipe Vieira Frujeri, Vibhav Vineet, Mihai Jalobeanu, Andrey Kolobov: PLEX: Making the Most of the Available Data for Robotic Manipulation Pretraining. CoRL 2023: 2624-2641
- [c41] Vivek Myers, Andre Wang He, Kuan Fang, Homer Rich Walke, Philippe Hansen-Estruch, Ching-An Cheng, Mihai Jalobeanu, Andrey Kolobov, Anca D. Dragan, Sergey Levine: Goal Representations for Instruction Following: A Semi-Supervised Language Interface to Control. CoRL 2023: 3894-3908
- [c40] Sanae Amani, Lin Yang, Ching-An Cheng: Provably Efficient Lifelong Reinforcement Learning with Linear Representation. ICLR 2023
- [c39] Anqi Li, Byron Boots, Ching-An Cheng: MAHALO: Unifying Offline Reinforcement Learning and Imitation Learning from Observations. ICML 2023: 19360-19384
- [c38] Hoai-An Nguyen, Ching-An Cheng: Provable Reset-free Reinforcement Learning by No-Regret Reduction. ICML 2023: 25939-25955
- [c37] Sean R. Sinclair, Felipe Vieira Frujeri, Ching-An Cheng, Luke Marshall, Hugo de Oliveira Barbalho, Jingling Li, Jennifer Neville, Ishai Menache, Adith Swaminathan: Hindsight Learning for MDPs with Exogenous Inputs. ICML 2023: 31877-31914
- [c36] Mohak Bhardwaj, Tengyang Xie, Byron Boots, Nan Jiang, Ching-An Cheng: Adversarial Model for Offline Reinforcement Learning. NeurIPS 2023
- [c35] Anqi Li, Dipendra Misra, Andrey Kolobov, Ching-An Cheng: Survival Instinct in Offline Reinforcement Learning. NeurIPS 2023
- [i40] Hoai-An Nguyen, Ching-An Cheng: Provable Reset-free Reinforcement Learning by No-Regret Reduction. CoRR abs/2301.02389 (2023)
- [i39] Mohak Bhardwaj, Tengyang Xie, Byron Boots, Nan Jiang, Ching-An Cheng: Adversarial Model for Offline Reinforcement Learning. CoRR abs/2302.11048 (2023)
- [i38] Garrett Thomas, Ching-An Cheng, Ricky Loynd, Vibhav Vineet, Mihai Jalobeanu, Andrey Kolobov: PLEX: Making the Most of the Available Data for Robotic Manipulation Pretraining. CoRR abs/2303.08789 (2023)
- [i37] Anqi Li, Byron Boots, Ching-An Cheng: MAHALO: Unifying Offline Reinforcement Learning and Imitation Learning from Observations. CoRR abs/2303.17156 (2023)
- [i36] Sinong Geng, Aldo Pacchiano, Andrey Kolobov, Ching-An Cheng: Improving Offline RL by Blending Heuristics. CoRR abs/2306.00321 (2023)
- [i35] Anqi Li, Dipendra Misra, Andrey Kolobov, Ching-An Cheng: Survival Instinct in Offline Reinforcement Learning. CoRR abs/2306.03286 (2023)
- [i34] Vivek Myers, Andre He, Kuan Fang, Homer Walke, Philippe Hansen-Estruch, Ching-An Cheng, Mihai Jalobeanu, Andrey Kolobov, Anca D. Dragan, Sergey Levine: Goal Representations for Instruction Following: A Semi-Supervised Language Interface to Control. CoRR abs/2307.00117 (2023)
- [i33] Huihan Liu, Alice Chen, Yuke Zhu, Adith Swaminathan, Andrey Kolobov, Ching-An Cheng: Interactive Robot Learning from Verbal Correction. CoRR abs/2310.17555 (2023)
- [i32] Ching-An Cheng, Andrey Kolobov, Dipendra Misra, Allen Nie, Adith Swaminathan: LLF-Bench: Benchmark for Interactive Learning from Language Feedback. CoRR abs/2312.06853 (2023)
- 2022
- [c34] Ching-An Cheng, Tengyang Xie, Nan Jiang, Alekh Agarwal: Adversarially Trained Actor Critic for Offline Reinforcement Learning. ICML 2022: 3852-3878
- [c33] Nolan Wagener, Andrey Kolobov, Felipe Vieira Frujeri, Ricky Loynd, Ching-An Cheng, Matthew J. Hausknecht: MoCapAct: A Multi-Task Dataset for Simulated Humanoid Control. NeurIPS 2022
- [i31] Ching-An Cheng, Tengyang Xie, Nan Jiang, Alekh Agarwal: Adversarially Trained Actor Critic for Offline Reinforcement Learning. CoRR abs/2202.02446 (2022)
- [i30] Sanae Amani, Lin F. Yang, Ching-An Cheng: Provably Efficient Lifelong Reinforcement Learning with Linear Function Approximation. CoRR abs/2206.00270 (2022)
- [i29] Sean R. Sinclair, Felipe Frujeri, Ching-An Cheng, Adith Swaminathan: Hindsight Learning for MDPs with Exogenous Inputs. CoRR abs/2207.06272 (2022)
- [i28] Nolan Wagener, Andrey Kolobov, Felipe Vieira Frujeri, Ricky Loynd, Ching-An Cheng, Matthew J. Hausknecht: MoCapAct: A Multi-Task Dataset for Simulated Humanoid Control. CoRR abs/2208.07363 (2022)
- [i27] Tengyang Xie, Mohak Bhardwaj, Nan Jiang, Ching-An Cheng: ARMOR: A Model-based Framework for Improving Arbitrary Baseline Policies with Offline Data. CoRR abs/2211.04538 (2022)
- 2021
- [j5] Ching-An Cheng, Mustafa Mukadam, Jan Issac, Stan Birchfield, Dieter Fox, Byron Boots, Nathan D. Ratliff: RMPflow: A Geometric Framework for Generation of Multitask Motion Policies. IEEE Trans. Autom. Sci. Eng. 18(3): 968-987 (2021)
- [c32] Andrea Zanette, Ching-An Cheng, Alekh Agarwal: Cautiously Optimistic Policy Optimization and Exploration with Linear Function Approximation. COLT 2021: 4473-4525
- [c31] Nolan Wagener, Byron Boots, Ching-An Cheng: Safe Reinforcement Learning Using Advantage-Based Intervention. ICML 2021: 10630-10640
- [c30] Tengyang Xie, Ching-An Cheng, Nan Jiang, Paul Mineiro, Alekh Agarwal: Bellman-consistent Pessimism for Offline Reinforcement Learning. NeurIPS 2021: 6683-6694
- [c29] Ching-An Cheng, Andrey Kolobov, Adith Swaminathan: Heuristic-Guided Reinforcement Learning. NeurIPS 2021: 13550-13563
- [c28] Anqi Li, Ching-An Cheng, Muhammad Asif Rana, Man Xie, Karl Van Wyk, Nathan D. Ratliff, Byron Boots: RMP2: A Structured Composable Policy Class for Robot Learning. Robotics: Science and Systems 2021
- [c27] Xinyan Yan, Byron Boots, Ching-An Cheng: Explaining fast improvement in online imitation learning. UAI 2021: 1874-1884
- [i26] Anqi Li, Ching-An Cheng, Muhammad Asif Rana, Man Xie, Karl Van Wyk, Nathan D. Ratliff, Byron Boots: RMP2: A Structured Composable Policy Class for Robot Learning. CoRR abs/2103.05922 (2021)
- [i25] Andrea Zanette, Ching-An Cheng, Alekh Agarwal: Cautiously Optimistic Policy Optimization and Exploration with Linear Function Approximation. CoRR abs/2103.12923 (2021)
- [i24] Ching-An Cheng, Andrey Kolobov, Adith Swaminathan: Heuristic-Guided Reinforcement Learning. CoRR abs/2106.02757 (2021)
- [i23] Tengyang Xie, Ching-An Cheng, Nan Jiang, Paul Mineiro, Alekh Agarwal: Bellman-consistent Pessimism for Offline Reinforcement Learning. CoRR abs/2106.06926 (2021)
- [i22] Nolan Wagener, Byron Boots, Ching-An Cheng: Safe Reinforcement Learning Using Advantage-Based Intervention. CoRR abs/2106.09110 (2021)
- 2020
- [j4] Yunpeng Pan, Ching-An Cheng, Kamil Saigol, Keuntaek Lee, Xinyan Yan, Evangelos A. Theodorou, Byron Boots: Imitation learning for agile autonomous driving. Int. J. Robotics Res. 39(2-3) (2020)
- [c26] Ching-An Cheng, Jonathan Lee, Ken Goldberg, Byron Boots: Online Learning with Continuous Variations: Dynamic Regret and Reductions. AISTATS 2020: 2218-2228
- [c25] Ching-An Cheng, Remi Tachet des Combes, Byron Boots, Geoffrey J. Gordon: A Reduction from Reinforcement Learning to No-Regret Online Learning. AISTATS 2020: 3514-3524
- [c24] Bruce Wingo, Ching-An Cheng, Muhammad Ali Murtaza, Munzir Zafar, Seth Hutchinson: Extending Riemmanian Motion Policies to a Class of Underactuated Wheeled-Inverted-Pendulum Robots. ICRA 2020: 3967-3973
- [c23] Ching-An Cheng, Andrey Kolobov, Alekh Agarwal: Policy Improvement via Imitation of Multiple Oracles. NeurIPS 2020
- [c22] Amir Rahimi, Amirreza Shaban, Ching-An Cheng, Richard Hartley, Byron Boots: Intra Order-preserving Functions for Calibration of Multi-Class Neural Networks. NeurIPS 2020
- [i21] Amir Rahimi, Amirreza Shaban, Ching-An Cheng, Byron Boots, Richard Hartley: Intra Order-preserving Functions for Calibration of Multi-Class Neural Networks. CoRR abs/2003.06820 (2020)
- [i20] Ching-An Cheng, Andrey Kolobov, Alekh Agarwal: Policy Improvement from Multiple Experts. CoRR abs/2007.00795 (2020)
- [i19] Xinyan Yan, Byron Boots, Ching-An Cheng: Explaining Fast Improvement in Online Policy Optimization. CoRR abs/2007.02520 (2020)
- [i18] Ching-An Cheng, Mustafa Mukadam, Jan Issac, Stan Birchfield, Dieter Fox, Byron Boots, Nathan D. Ratliff: RMPflow: A Geometric Framework for Generation of Multi-Task Motion Policies. CoRR abs/2007.14256 (2020)
2010 – 2019
- 2019
- [c21] Amirreza Shaban, Ching-An Cheng, Nathan Hatch, Byron Boots: Truncated Back-propagation for Bilevel Optimization. AISTATS 2019: 1723-1732
- [c20] Ching-An Cheng, Xinyan Yan, Evangelos A. Theodorou, Byron Boots: Accelerating Imitation Learning with Predictive Models. AISTATS 2019: 3187-3196
- [c19] Anqi Li, Ching-An Cheng, Byron Boots, Magnus Egerstedt: Stable, Concurrent Controller Composition for Multi-Objective Robotic Tasks. CDC 2019: 1144-1151
- [c18] Mustafa Mukadam, Ching-An Cheng, Dieter Fox, Byron Boots, Nathan D. Ratliff: Riemannian Motion Policy Fusion through Learnable Lyapunov Function Reshaping. CoRL 2019: 204-219
- [c17] Ching-An Cheng, Xinyan Yan, Byron Boots: Trajectory-wise Control Variates for Variance Reduction in Policy Gradient Methods. CoRL 2019: 1379-1394
- [c16] Ching-An Cheng, Xinyan Yan, Nathan D. Ratliff, Byron Boots: Predictor-Corrector Policy Optimization. ICML 2019: 1151-1161
- [c15] Nolan Wagener, Ching-An Cheng, Jacob Sacks, Byron Boots: An Online Learning Approach to Model Predictive Control. Robotics: Science and Systems 2019
- [i17] Ching-An Cheng, Jonathan Lee, Ken Goldberg, Byron Boots: Online Learning with Continuous Variations: Dynamic Regret and Reductions. CoRR abs/1902.07286 (2019)
- [i16] Nolan Wagener, Ching-An Cheng, Jacob Sacks, Byron Boots: An Online Learning Approach to Model Predictive Control. CoRR abs/1902.08967 (2019)
- [i15] Anqi Li, Ching-An Cheng, Byron Boots, Magnus Egerstedt: Stable, Concurrent Controller Composition for Multi-Objective Robotic Tasks. CoRR abs/1903.12605 (2019)
- [i14] Ching-An Cheng, Xinyan Yan, Byron Boots: Trajectory-wise Control Variates for Variance Reduction in Policy Gradient Methods. CoRR abs/1908.03263 (2019)
- [i13] Mustafa Mukadam, Ching-An Cheng, Dieter Fox, Byron Boots, Nathan D. Ratliff: Riemannian Motion Policy Fusion through Learnable Lyapunov Function Reshaping. CoRR abs/1910.02646 (2019)
- [i12] Ching-An Cheng, Remi Tachet des Combes, Byron Boots, Geoffrey J. Gordon: A Reduction from Reinforcement Learning to No-Regret Online Learning. CoRR abs/1911.05873 (2019)
- [i11] Jonathan Lee, Ching-An Cheng, Ken Goldberg, Byron Boots: Continuous Online Learning and New Insights to Online Imitation Learning. CoRR abs/1912.01261 (2019)
- 2018
- [c14] Ching-An Cheng, Byron Boots: Convergence of Value Aggregation for Imitation Learning. AISTATS 2018: 1801-1809
- [c13] Jennifer L. Molnar, Ching-An Cheng, Lucas O. Tiziani, Byron Boots, Frank L. Hammond: Optical Sensing and Control Methods for Soft Pneumatically Actuated Robotic Manipulators. ICRA 2018: 1-8
- [c12] Hugh Salimbeni, Ching-An Cheng, Byron Boots, Marc Peter Deisenroth: Orthogonally Decoupled Variational Gaussian Processes. NeurIPS 2018: 8725-8734
- [c11] Yunpeng Pan, Ching-An Cheng, Kamil Saigol, Keuntaek Lee, Xinyan Yan, Evangelos A. Theodorou, Byron Boots: Agile Autonomous Driving using End-to-End Deep Imitation Learning. Robotics: Science and Systems 2018
- [c10] Ching-An Cheng, Xinyan Yan, Nolan Wagener, Byron Boots: Fast Policy Learning through Imitation and Reinforcement. UAI 2018: 845-855
- [c9] Ching-An Cheng, Mustafa Mukadam, Jan Issac, Stan Birchfield, Dieter Fox, Byron Boots, Nathan D. Ratliff: RMPflow: A Computational Graph for Automatic Motion Policy Generation. WAFR 2018: 441-457
- [i10] Ching-An Cheng, Byron Boots: Convergence of Value Aggregation for Imitation Learning. CoRR abs/1801.07292 (2018)
- [i9] Ching-An Cheng, Xinyan Yan, Nolan Wagener, Byron Boots: Fast Policy Learning through Imitation and Reinforcement. CoRR abs/1805.10413 (2018)
- [i8] Ching-An Cheng, Xinyan Yan, Evangelos A. Theodorou, Byron Boots: Model-Based Imitation Learning with Accelerated Convergence. CoRR abs/1806.04642 (2018)
- [i7] Hugh Salimbeni, Ching-An Cheng, Byron Boots, Marc Peter Deisenroth: Orthogonally Decoupled Variational Gaussian Processes. CoRR abs/1809.08820 (2018)
- [i6] Ching-An Cheng, Xinyan Yan, Nathan D. Ratliff, Byron Boots: Predictor-Corrector Policy Optimization. CoRR abs/1810.06509 (2018)
- [i5] Amirreza Shaban, Ching-An Cheng, Nathan Hatch, Byron Boots: Truncated Back-propagation for Bilevel Optimization. CoRR abs/1810.10667 (2018)
- [i4] Ching-An Cheng, Mustafa Mukadam, Jan Issac, Stan Birchfield, Dieter Fox, Byron Boots, Nathan D. Ratliff: RMPflow: A Computational Graph for Automatic Motion Policy Generation. CoRR abs/1811.07049 (2018)
- 2017
- [c8] Mustafa Mukadam, Ching-An Cheng, Xinyan Yan, Byron Boots: Approximately optimal continuous-time motion planning and control via Probabilistic Inference. ICRA 2017: 664-671
- [c7] Ching-An Cheng, Byron Boots: Variational Inference for Gaussian Process Models with Linear Complexity. NIPS 2017: 5184-5194
- [i3] Mustafa Mukadam, Ching-An Cheng, Xinyan Yan, Byron Boots: Approximately Optimal Continuous-Time Motion Planning and Control via Probabilistic Inference. CoRR abs/1702.07335 (2017)
- [i2] Yunpeng Pan, Ching-An Cheng, Kamil Saigol, Keuntaek Lee, Xinyan Yan, Evangelos A. Theodorou, Byron Boots: Agile Off-Road Autonomous Driving Using End-to-End Deep Imitation Learning. CoRR abs/1709.07174 (2017)
- [i1] Ching-An Cheng, Byron Boots: Variational Inference for Gaussian Process Models with Linear Complexity. CoRR abs/1711.10127 (2017)
- 2016
- [j3] Sheng-Yen Lo, Ching-An Cheng, Han-Pang Huang: Virtual Impedance Control for Safe Human-Robot Interaction. J. Intell. Robotic Syst. 82(1): 3-19 (2016)
- [j2] Ching-An Cheng, Han-Pang Huang, Huan-Kun Hsu, Wei-Zh Lai, Chih-Chun Cheng: Learning the Inverse Dynamics of Robotic Manipulators in Structured Reproducing Kernel Hilbert Space. IEEE Trans. Cybern. 46(7): 1691-1703 (2016)
- [j1] Ching-An Cheng, Han-Pang Huang: Learn the Lagrangian: A Vector-Valued RKHS Approach to Identifying Lagrangian Systems. IEEE Trans. Cybern. 46(12): 3247-3258 (2016)
- [c6] Ching-An Cheng, Byron Boots: Incremental Variational Sparse Gaussian Process Regression. NIPS 2016: 4403-4411
- 2015
- [c5] Che-Hsuan Chang, Han-Pang Huang, Huan-Kun Hsu, Ching-An Cheng: Humanoid robot push-recovery strategy based on CMP criterion and angular momentum regulation. AIM 2015: 761-766
- [c4] Ming-Bao Huang, Han-Pang Huang, Chih-Chun Cheng, Ching-An Cheng: Efficient grasp synthesis and control strategy for robot hand-arm system. CASE 2015: 1256-1257
- 2013
- [c3] Ching-An Cheng, Tzu-Hao Huang, Han-Pang Huang: Bayesian human intention estimator for exoskeleton system. AIM 2013: 465-470
- [c2] Tzu-Hao Huang, Ching-An Cheng, Han-Pang Huang: Self-learning assistive exoskeleton with sliding mode admittance control. IROS 2013: 698-703
- 2012
- [c1] Tzu-Hao Huang, Han-Pang Huang, Ching-An Cheng, Jiun-Yih Kuan, Po-Ting Lee, Shih-Yi Huang: Design of a new hybrid control and knee orthosis for human walking and rehabilitation. IROS 2012: 3653-3658