


Sergey Levine
2020 – today
- 2022
- [i334] Jathushan Rajasegaran, Chelsea Finn, Sergey Levine: Fully Online Meta-Learning Without Task Boundaries. CoRR abs/2202.00263 (2022)
- [i333] Tianhe Yu, Aviral Kumar, Yevgen Chebotar, Karol Hausman, Chelsea Finn, Sergey Levine: How to Leverage Unlabeled Data in Offline Reinforcement Learning. CoRR abs/2202.01741 (2022)
- [i332] Eric Jang, Alex Irpan, Mohi Khansari, Daniel Kappler, Frederik Ebert, Corey Lynch, Sergey Levine, Chelsea Finn: BC-Z: Zero-Shot Task Generalization with Robotic Imitation Learning. CoRR abs/2202.02005 (2022)
- [i331] Sean Chen, Jensen Gao, Siddharth Reddy, Glen Berseth, Anca D. Dragan, Sergey Levine: ASHA: Assistive Teleoperation via Human-in-the-Loop Reinforcement Learning. CoRR abs/2202.02465 (2022)
- [i330] Brandon Trabucco, Xinyang Geng, Aviral Kumar, Sergey Levine: Design-Bench: Benchmarks for Data-Driven Offline Model-Based Optimization. CoRR abs/2202.08450 (2022)
- [i329] Dhruv Shah, Sergey Levine: ViKiNG: Vision-Based Kilometer-Scale Navigation with Geographic Hints. CoRR abs/2202.11271 (2022)
- [i328] Jensen Gao, Siddharth Reddy, Glen Berseth, Nicholas Hardy, Nikhilesh Natraj, Karunesh Ganguly, Anca D. Dragan, Sergey Levine: X2T: Training an X-to-Text Typing Interface with Online Learning from User Feedback. CoRR abs/2203.02072 (2022)
- [i327] Abhishek Gupta, Corey Lynch, Brandon Kinman, Garrett Peake, Sergey Levine, Karol Hausman: Demonstration-Bootstrapped Autonomous Practicing via Multi-Task Reinforcement Learning. CoRR abs/2203.15755 (2022)
- [i326] Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alexander Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil J. Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan: Do As I Can, Not As I Say: Grounding Language in Robotic Affordances. CoRR abs/2204.01691 (2022)
- [i325] Ikechukwu Uchendu, Ted Xiao, Yao Lu, Banghua Zhu, Mengyuan Yan, Joséphine Simon, Matthew Bennice, Chuyuan Fu, Cong Ma, Jiantao Jiao, Sergey Levine, Karol Hausman: Jump-Start Reinforcement Learning. CoRR abs/2204.02372 (2022)
- [i324] Aviral Kumar, Joey Hong, Anikait Singh, Sergey Levine: When Should We Prefer Offline Reinforcement Learning Over Behavioral Cloning? CoRR abs/2204.05618 (2022)
- [i323] Siddharth Verma, Justin Fu, Mengjiao Yang, Sergey Levine: CHAI: A CHatbot AI for Task-Oriented Dialogue with Offline Reinforcement Learning. CoRR abs/2204.08426 (2022)
- [i322] Homanga Bharadhwaj, Mohammad Babaeizadeh, Dumitru Erhan, Sergey Levine: INFOrmation Prioritization through EmPOWERment in Visual Model-Based RL. CoRR abs/2204.08585 (2022)
- [i321] Charlie Snell, Mengjiao Yang, Justin Fu, Yi Su, Sergey Levine: Context-Aware Language Modeling for Goal-Oriented Dialogue Systems. CoRR abs/2204.10198 (2022)
- [i320] Philippe Hansen-Estruch, Amy Zhang, Ashvin Nair, Patrick Yin, Sergey Levine: Bisimulation Makes Analogies in Goal-Conditioned Reinforcement Learning. CoRR abs/2204.13060 (2022)
- [i319] Rowan McAllister, Blake Wulfe, Jean Mercat, Logan Ellis, Sergey Levine, Adrien Gaidon: Control-Aware Prediction Objectives for Autonomous Driving. CoRR abs/2204.13319 (2022)
- [i318] Xue Bin Peng, Yunrong Guo, Lina Halper, Sergey Levine, Sanja Fidler: ASE: Large-Scale Reusable Adversarial Skill Embeddings for Physically Simulated Characters. CoRR abs/2205.01906 (2022)
- 2021
- [j19] Julian Ibarz, Jie Tan, Chelsea Finn, Mrinal Kalakrishnan, Peter Pastor, Sergey Levine: How to train your robot with deep reinforcement learning: lessons we have learned. Int. J. Robotics Res. 40(4-5) (2021)
- [j18] Gregory Kahn, Pieter Abbeel, Sergey Levine: BADGR: An Autonomous Self-Supervised Learning-Based Navigation System. IEEE Robotics Autom. Lett. 6(2): 1312-1319 (2021)
- [j17] Suneel Belkhale, Rachel Li, Gregory Kahn, Rowan McAllister, Roberto Calandra, Sergey Levine: Model-Based Meta-Reinforcement Learning for Flight With Suspended Payloads. IEEE Robotics Autom. Lett. 6(2): 1471-1478 (2021)
- [j16] Gregory Kahn, Pieter Abbeel, Sergey Levine: LaND: Learning to Navigate From Disengagements. IEEE Robotics Autom. Lett. 6(2): 1872-1879 (2021)
- [j15] Xue Bin Peng, Ze Ma, Pieter Abbeel, Sergey Levine, Angjoo Kanazawa: AMP: adversarial motion priors for stylized physics-based character control. ACM Trans. Graph. 40(4): 144:1-144:20 (2021)
- [c276] Charles Sun, Jedrzej Orbik, Coline Manon Devin, Brian H. Yang, Abhishek Gupta, Glen Berseth, Sergey Levine: Fully Autonomous Real-World Reinforcement Learning with Applications to Mobile Manipulation. CoRL 2021: 308-319
- [c275] Aviral Kumar, Anikait Singh, Stephen Tian, Chelsea Finn, Sergey Levine: A Workflow for Offline Model-Free Robotic Reinforcement Learning. CoRL 2021: 417-428
- [c274] Dmitry Kalashnikov, Jake Varley, Yevgen Chebotar, Benjamin Swanson, Rico Jonschkowski, Chelsea Finn, Sergey Levine, Karol Hausman: Scaling Up Multi-Task Robotic Reinforcement Learning. CoRL 2021: 557-575
- [c273] Dhruv Shah, Benjamin Eysenbach, Nicholas Rhinehart, Sergey Levine: Rapid Exploration for Open-World Navigation with Latent Goal Models. CoRL 2021: 674-684
- [c272] Eric Jang, Alex Irpan, Mohi Khansari, Daniel Kappler, Frederik Ebert, Corey Lynch, Sergey Levine, Chelsea Finn: BC-Z: Zero-Shot Task Generalization with Robotic Imitation Learning. CoRL 2021: 991-1002
- [c271] Yao Lu, Karol Hausman, Yevgen Chebotar, Mengyuan Yan, Eric Jang, Alexander Herzog, Ted Xiao, Alex Irpan, Mohi Khansari, Dmitry Kalashnikov, Sergey Levine: AW-Opt: Learning Robotic Skills with Imitation and Reinforcement at Scale. CoRL 2021: 1078-1088
- [c270] Katie Kang, Gregory Kahn, Sergey Levine: Hierarchically Integrated Models: Learning to Navigate from Heterogeneous Robots. CoRL 2021: 1316-1325
- [c269] Sergey Levine: Understanding the World Through Action. CoRL 2021: 1752-1757
- [c268] Amy Zhang, Rowan Thomas McAllister, Roberto Calandra, Yarin Gal, Sergey Levine: Learning Invariant Representations for Reinforcement Learning without Reconstruction. ICLR 2021
- [c267] Anurag Ajay, Aviral Kumar, Pulkit Agrawal, Sergey Levine, Ofir Nachum: OPAL: Offline Primitive Discovery for Accelerating Offline Reinforcement Learning. ICLR 2021
- [c266] Glen Berseth, Daniel Geng, Coline Manon Devin, Nicholas Rhinehart, Chelsea Finn, Dinesh Jayaraman, Sergey Levine: SMiRL: Surprise Minimizing Reinforcement Learning in Unstable Environments. ICLR 2021
- [c265] Homanga Bharadhwaj, Aviral Kumar, Nicholas Rhinehart, Sergey Levine, Florian Shkurti, Animesh Garg: Conservative Safety Critics for Exploration. ICLR 2021
- [c264] John D. Co-Reyes, Yingjie Miao, Daiyi Peng, Esteban Real, Quoc V. Le, Sergey Levine, Honglak Lee, Aleksandra Faust: Evolving Reinforcement Learning Algorithms. ICLR 2021
- [c263] Benjamin Eysenbach, Shreyas Chaudhari, Swapnil Asawa, Sergey Levine, Ruslan Salakhutdinov: Off-Dynamics Reinforcement Learning: Training for Transfer with Domain Classifiers. ICLR 2021
- [c262] Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine: C-Learning: Learning to Achieve Goals via Recursive Classification. ICLR 2021
- [c261] Justin Fu, Mohammad Norouzi, Ofir Nachum, George Tucker, Ziyu Wang, Alexander Novikov, Mengjiao Yang, Michael R. Zhang, Yutian Chen, Aviral Kumar, Cosmin Paduraru, Sergey Levine, Thomas Paine: Benchmarks for Deep Off-Policy Evaluation. ICLR 2021
- [c260] Justin Fu, Sergey Levine: Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation. ICLR 2021
- [c259] Jensen Gao, Siddharth Reddy, Glen Berseth, Nicholas Hardy, Nikhilesh Natraj, Karunesh Ganguly, Anca D. Dragan, Sergey Levine: X2T: Training an X-to-Text Typing Interface with Online Learning from User Feedback. ICLR 2021
- [c258] Dibya Ghosh, Abhishek Gupta, Ashwin Reddy, Justin Fu, Coline Manon Devin, Benjamin Eysenbach, Sergey Levine: Learning to Reach Goals via Iterated Supervised Learning. ICLR 2021
- [c257] Anirudh Goyal, Alex Lamb, Phanideep Gampa, Philippe Beaudoin, Charles Blundell, Sergey Levine, Yoshua Bengio, Michael Curtis Mozer: Factorizing Declarative and Procedural Knowledge in Structured, Dynamical Environments. ICLR 2021
- [c256] Anirudh Goyal, Alex Lamb, Jordan Hoffmann, Shagun Sodhani, Sergey Levine, Yoshua Bengio, Bernhard Schölkopf: Recurrent Independent Mechanisms. ICLR 2021
- [c255] Aviral Kumar, Rishabh Agarwal, Dibya Ghosh, Sergey Levine: Implicit Under-Parameterization Inhibits Data-Efficient Deep Reinforcement Learning. ICLR 2021
- [c254] Avi Singh, Huihan Liu, Gaoyue Zhou, Albert Yu, Nicholas Rhinehart, Sergey Levine: Parrot: Data-Driven Behavioral Priors for Reinforcement Learning. ICLR 2021
- [c253] Stephen Tian, Suraj Nair, Frederik Ebert, Sudeep Dasari, Benjamin Eysenbach, Chelsea Finn, Sergey Levine: Model-Based Visual Planning with Self-Supervised Functional Distances. ICLR 2021
- [c252] Michael Chang, Sidhant Kaushik, Sergey Levine, Tom Griffiths: Modularity in Reinforcement Learning via Algorithmic Independence in Credit Assignment. ICML 2021: 1452-1462
- [c251] Yevgen Chebotar, Karol Hausman, Yao Lu, Ted Xiao, Dmitry Kalashnikov, Jacob Varley, Alex Irpan, Benjamin Eysenbach, Ryan Julian, Chelsea Finn, Sergey Levine: Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills. ICML 2021: 1518-1528
- [c250] Jongwook Choi, Archit Sharma, Honglak Lee, Sergey Levine, Shixiang Shane Gu: Variational Empowerment as Representation Learning for Goal-Conditioned Reinforcement Learning. ICML 2021: 1953-1963
- [c249] Angelos Filos, Clare Lyle, Yarin Gal, Sergey Levine, Natasha Jaques, Gregory Farquhar: PsiPhi-Learning: Reinforcement Learning with Demonstrations using Successor Features and Inverse Temporal Difference Learning. ICML 2021: 3305-3317
- [c248] Hiroki Furuta, Tatsuya Matsushima, Tadashi Kozuno, Yutaka Matsuo, Sergey Levine, Ofir Nachum, Shixiang Shane Gu: Policy Information Capacity: Information-Theoretic Measure for Task Complexity in Deep Reinforcement Learning. ICML 2021: 3541-3552
- [c247] Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, Tony Lee, Etienne David, Ian Stavness, Wei Guo, Berton Earnshaw, Imran Haque, Sara M. Beery, Jure Leskovec, Anshul Kundaje, Emma Pierson, Sergey Levine, Chelsea Finn, Percy Liang: WILDS: A Benchmark of in-the-Wild Distribution Shifts. ICML 2021: 5637-5664
- [c246] Kevin Li, Abhishek Gupta, Ashwin Reddy, Vitchyr H. Pong, Aurick Zhou, Justin Yu, Sergey Levine: MURAL: Meta-Learning Uncertainty-Aware Rewards for Outcome-Driven Reinforcement Learning. ICML 2021: 6346-6356
- [c245] Eric Mitchell, Rafael Rafailov, Xue Bin Peng, Sergey Levine, Chelsea Finn: Offline Meta-Reinforcement Learning with Advantage Weighting. ICML 2021: 7780-7791
- [c244] Kamal Ndousse, Douglas Eck, Sergey Levine, Natasha Jaques: Emergent Social Learning via Multi-agent Reinforcement Learning. ICML 2021: 7991-8004
- [c243] Oleh Rybkin, Kostas Daniilidis, Sergey Levine: Simple and Effective VAE Training with Calibrated Decoders. ICML 2021: 9179-9189
- [c242] Oleh Rybkin, Chuning Zhu, Anusha Nagabandi, Kostas Daniilidis, Igor Mordatch, Sergey Levine: Model-Based Reinforcement Learning via Latent-Space Collocation. ICML 2021: 9190-9201
- [c241] Brandon Trabucco, Aviral Kumar, Xinyang Geng, Sergey Levine: Conservative Objective Models for Effective Offline Model-Based Optimization. ICML 2021: 10358-10368
- [c240] Aurick Zhou, Sergey Levine: Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation. ICML 2021: 12803-12812
- [c239] Zhongyu Li, Xuxin Cheng, Xue Bin Peng, Pieter Abbeel, Sergey Levine, Glen Berseth, Koushil Sreenath: Reinforcement Learning for Robust Parameterized Locomotion Control of Bipedal Robots. ICRA 2021: 2811-2817
- [c238] Yifeng Jiang, Tingnan Zhang, Daniel Ho, Yunfei Bai, C. Karen Liu, Sergey Levine, Jie Tan: SimGAN: Hybrid Simulator Identification for Domain Adaptation via Adversarial Reinforcement Learning. ICRA 2021: 2884-2890
- [c237] Soroush Nasiriany, Vitchyr H. Pong, Ashvin Nair, Alexander Khazatsky, Glen Berseth, Sergey Levine: DisCo RL: Distribution-Conditioned Reinforcement Learning for General-Purpose Policies. ICRA 2021: 6635-6641
- [c236] Abhishek Gupta, Justin Yu, Tony Z. Zhao, Vikash Kumar, Aaron Rovinsky, Kelvin Xu, Thomas Devlin, Sergey Levine: Reset-Free Reinforcement Learning via Multi-Task Learning: Learning Dexterous Manipulation Behaviors without Human Intervention. ICRA 2021: 6664-6671
- [c235] Dhruv Shah, Benjamin Eysenbach, Gregory Kahn, Nicholas Rhinehart, Sergey Levine: ViNG: Learning Open-World Navigation with Visual Goals. ICRA 2021: 13215-13222
- [c234] Nicholas Rhinehart, Jeff He, Charles Packer, Matthew A. Wright, Rowan McAllister, Joseph E. Gonzalez, Sergey Levine: Contingencies from Observations: Tractable Contingency Planning with Learned Behavior Models. ICRA 2021: 13663-13669
- [c233] Alexander Khazatsky, Ashvin Nair, Daniel Jing, Sergey Levine: What Can I Do Here? Learning New Skills by Imagining Visual Affordances. ICRA 2021: 14291-14297
- [c232] Aurick Zhou, Sergey Levine: Bayesian Adaptation for Covariate Shift. NeurIPS 2021: 914-927
- [c231] Michael Janner, Qiyang Li, Sergey Levine: Offline Reinforcement Learning as One Big Sequence Modeling Problem. NeurIPS 2021: 1273-1286
- [c230] Nicholas Rhinehart, Jenny Wang, Glen Berseth, John D. Co-Reyes, Danijar Hafner, Chelsea Finn, Sergey Levine: Information is Power: Intrinsic Control via Information Capture. NeurIPS 2021: 10745-10758
- [c229] Tianhe Yu, Aviral Kumar, Yevgen Chebotar, Karol Hausman, Sergey Levine, Chelsea Finn: Conservative Data Sharing for Multi-Task Offline Reinforcement Learning. NeurIPS 2021: 11501-11516
- [c228] Ben Eysenbach, Sergey Levine, Ruslan Salakhutdinov: Replacing Rewards with Examples: Example-Based Policy Search via Recursive Classification. NeurIPS 2021: 11541-11552
- [c227] Tim G. J. Rudner, Vitchyr Pong, Rowan McAllister, Yarin Gal, Sergey Levine: Outcome-Driven Reinforcement Learning via Variational Inference. NeurIPS 2021: 13045-13058
- [c226] Archit Sharma, Abhishek Gupta, Sergey Levine, Karol Hausman, Chelsea Finn: Autonomous Reinforcement Learning via Subgoal Curricula. NeurIPS 2021: 18474-18486
- [c225] Marvin Zhang, Henrik Marklund, Nikita Dhawan, Abhishek Gupta, Sergey Levine, Chelsea Finn: Adaptive Risk Minimization: Learning to Adapt to Domain Shift. NeurIPS 2021: 23664-23678
- [c224] Dibya Ghosh, Jad Rahme, Aviral Kumar, Amy Zhang, Ryan P. Adams, Sergey Levine: Why Generalization in RL is Difficult: Epistemic POMDPs and Implicit Partial Observability. NeurIPS 2021: 25502-25515
- [c223] Kate Rakelly, Abhishek Gupta, Carlos Florensa, Sergey Levine: Which Mutual-Information Representation Learning Objectives are Sufficient for Control? NeurIPS 2021: 26345-26357
- [c222] Sid Reddy, Anca Dragan, Sergey Levine: Pragmatic Image Compression for Human-in-the-Loop Decision-Making. NeurIPS 2021: 26499-26510
- [c221] Ben Eysenbach, Ruslan Salakhutdinov, Sergey Levine: Robust Predictable Control. NeurIPS 2021: 27813-27825
- [c220] Tianhe Yu, Aviral Kumar, Rafael Rafailov, Aravind Rajeswaran, Sergey Levine, Chelsea Finn: COMBO: Conservative Offline Model-Based Policy Optimization. NeurIPS 2021: 28954-28967
- [i317] John D. Co-Reyes, Yingjie Miao, Daiyi Peng, Esteban Real, Sergey Levine, Quoc V. Le, Honglak Lee, Aleksandra Faust: Evolving Reinforcement Learning Algorithms. CoRR abs/2101.03958 (2021)
- [i316] Yifeng Jiang, Tingnan Zhang, Daniel Ho, Yunfei Bai, C. Karen Liu, Sergey Levine, Jie Tan: SimGAN: Hybrid Simulator Identification for Domain Adaptation via Adversarial Reinforcement Learning. CoRR abs/2101.06005 (2021)
- [i315] Julian Ibarz, Jie Tan, Chelsea Finn, Mrinal Kalakrishnan, Peter Pastor, Sergey Levine: How to Train Your Robot with Deep Reinforcement Learning; Lessons We've Learned. CoRR abs/2102.02915 (2021)
- [i314] Justin Fu, Sergey Levine: Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation. CoRR abs/2102.07970 (2021)
- [i313] Tianhe Yu, Aviral Kumar, Rafael Rafailov, Aravind Rajeswaran, Sergey Levine, Chelsea Finn: COMBO: Conservative Offline Model-Based Policy Optimization. CoRR abs/2102.08363 (2021)
- [i312] Angelos Filos, Clare Lyle, Yarin Gal, Sergey Levine, Natasha Jaques, Gregory Farquhar: PsiPhi-Learning: Reinforcement Learning with Demonstrations using Successor Features and Inverse Temporal Difference Learning. CoRR abs/2102.12560 (2021)
- [i311] Benjamin Eysenbach, Sergey Levine: Maximum Entropy RL (Provably) Solves Some Robust RL Problems. CoRR abs/2103.06257 (2021)
- [i310] Benjamin Eysenbach, Sergey Levine, Ruslan Salakhutdinov: Replacing Rewards with Examples: Example-Based Policy Search via Recursive Classification. CoRR abs/2103.12656 (2021)
- [i309] Hiroki Furuta, Tatsuya Matsushima, Tadashi Kozuno, Yutaka Matsuo, Sergey Levine, Ofir Nachum, Shixiang Shane Gu: Policy Information Capacity: Information-Theoretic Measure for Task Complexity in Deep Reinforcement Learning. CoRR abs/2103.12726 (2021)
- [i308] Zhongyu Li, Xuxin Cheng, Xue Bin Peng, Pieter Abbeel, Sergey Levine, Glen Berseth, Koushil Sreenath: Reinforcement Learning for Robust Parameterized Locomotion Control of Bipedal Robots. CoRR abs/2103.14295 (2021)
- [i307] Justin Fu, Mohammad Norouzi, Ofir Nachum, George Tucker, Ziyu Wang, Alexander Novikov, Mengjiao Yang, Michael R. Zhang, Yutian Chen, Aviral Kumar, Cosmin Paduraru, Sergey Levine, Tom Le Paine: Benchmarks for Deep Off-Policy Evaluation. CoRR abs/2103.16596 (2021)
- [i306] Xue Bin Peng, Ze Ma, Pieter Abbeel, Sergey Levine, Angjoo Kanazawa: AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control. CoRR abs/2104.02180 (2021)
- [i305] Dhruv Shah, Benjamin Eysenbach, Nicholas Rhinehart, Sergey Levine: RECON: Rapid Exploration for Open-World Navigation with Latent Goal Models. CoRR abs/2104.05859 (2021)
- [i304] Yevgen Chebotar, Karol Hausman, Yao Lu, Ted Xiao, Dmitry Kalashnikov, Jake Varley, Alex Irpan, Benjamin Eysenbach, Ryan Julian, Chelsea Finn, Sergey Levine: Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills. CoRR abs/2104.07749 (2021)
- [i303] Dmitry Kalashnikov, Jacob Varley, Yevgen Chebotar, Benjamin Swanson, Rico Jonschkowski, Chelsea Finn, Sergey Levine, Karol Hausman: MT-Opt: Continuous Multi-Task Robotic Reinforcement Learning at Scale. CoRR abs/2104.08212 (2021)
- [i302] Tim G. J. Rudner, Vitchyr H. Pong, Rowan McAllister, Yarin Gal, Sergey Levine: Outcome-Driven Reinforcement Learning via Variational Inference. CoRR abs/2104.10190 (2021)
- [i301] Nicholas Rhinehart, Jeff He, Charles Packer, Matthew A. Wright, Rowan McAllister, Joseph E. Gonzalez, Sergey Levine: Contingencies from Observations: Tractable Contingency Planning with Learned Behavior Models. CoRR abs/2104.10558 (2021)
- [i300] Abhishek Gupta, Justin Yu, Tony Z. Zhao, Vikash Kumar, Aaron Rovinsky, Kelvin Xu, Thomas Devlin, Sergey Levine: Reset-Free Reinforcement Learning via Multi-Task Learning: Learning Dexterous Manipulation Behaviors without Human Intervention. CoRR abs/2104.11203 (2021)
- [i299] Soroush Nasiriany, Vitchyr H. Pong, Ashvin Nair, Alexander Khazatsky, Glen Berseth, Sergey Levine: DisCo RL: Distribution-Conditioned Reinforcement Learning for General-Purpose Policies. CoRR abs/2104.11707 (2021)
- [i298] Alexander Khazatsky, Ashvin Nair, Daniel Jing, Sergey Levine: What Can I Do Here? Learning New Skills by Imagining Visual Affordances. CoRR abs/2106.00671 (2021)
- [i297] Jongwook Choi, Archit Sharma, Honglak Lee, Sergey Levine, Shixiang Shane Gu: Variational Empowerment as Representation Learning for Goal-Based Reinforcement Learning. CoRR abs/2106.01404 (2021)
- [i296] Michael Janner, Qiyang Li, Sergey Levine: Reinforcement Learning as One Big Sequence Modeling Problem. CoRR abs/2106.02039 (2021)
- [i295] Kate Rakelly, Abhishek Gupta, Carlos Florensa, Sergey Levine: Which Mutual-Information Representation Learning Objectives are Sufficient for Control? CoRR abs/2106.07278 (2021)
- [i294] Mohammad Babaeizadeh, Mohammad Taghi Saffar, Suraj Nair, Sergey Levine, Chelsea Finn, Dumitru Erhan: FitVid: Overfitting in Pixel-Level Video Prediction. CoRR abs/2106.13195 (2021)
- [i293] Oleh Rybkin, Chuning Zhu, Anusha Nagabandi, Kostas Daniilidis, Igor Mordatch, Sergey Levine: Model-Based Reinforcement Learning via Latent-Space Collocation. CoRR abs/2106.13229 (2021)
- [i292] Katie Kang, Gregory Kahn, Sergey Levine: Multi-Robot Deep Reinforcement Learning for Mobile Navigation. CoRR abs/2106.13280 (2021)
- [i291] Michael Chang, Sidhant Kaushik, Sergey Levine, Thomas L. Griffiths: Modularity in Reinforcement Learning via Algorithmic Independence in Credit Assignment. CoRR abs/2106.14993 (2021)
- [i290] Vitchyr H. Pong, Ashvin Nair, Laura Smith, Catherine Huang, Sergey Levine: Offline Meta-Reinforcement Learning with Online Self-Supervision. CoRR abs/2107.03974 (2021)
- [i289] Dibya Ghosh, Jad Rahme, Aviral Kumar, Amy Zhang, Ryan P. Adams, Sergey Levine: Why Generalization in RL is Difficult: Epistemic POMDPs and Implicit Partial Observability. CoRR abs/2107.06277 (2021)
- [i288] Brandon Trabucco, Aviral Kumar, Xinyang Geng, Sergey Levine: Conservative Objective Models for Effective Offline Model-Based Optimization. CoRR abs/2107.06882 (2021)
- [i287] Kevin Li, Abhishek Gupta, Ashwin Reddy, Vitchyr Pong, Aurick Zhou, Justin Yu, Sergey Levine: MURAL: Meta-Learning Uncertainty-Aware Rewards for Outcome-Driven Reinforcement Learning. CoRR abs/2107.07184 (2021)
- [i286] Arnaud Fickinger, Natasha Jaques, Samyak Parajuli, Michael Chang, Nicholas Rhinehart, Glen Berseth, Stuart Russell, Sergey Levine: Explore and Control with Adversarial Surprise. CoRR abs/2107.07394 (2021)
- [i285] Archit Sharma, Abhishek Gupta, Sergey Levine, Karol Hausman, Chelsea Finn: Persistent Reinforcement Learning via Subgoal Curricula. CoRR abs/2107.12931 (2021)
- [i284] Charles Sun, Jedrzej Orbik, Coline Devin, Brian H. Yang, Abhishek Gupta, Glen Berseth, Sergey Levine: ReLMM: Practical RL for Learning Mobile Manipulation Skills Using Only Onboard Sensors. CoRR abs/2107.13545 (2021)
- [i283] Siddharth Reddy, Anca D. Dragan, Sergey Levine: Pragmatic Image Compression for Human-in-the-Loop Decision-Making. CoRR abs/2108.04219 (2021)
- [i282] Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine: Robust Predictable Control. CoRR abs/2109.03214 (2021)
- [i281] Tianhe Yu, Aviral Kumar, Yevgen Chebotar, Karol Hausman, Sergey Levine, Chelsea Finn: Conservative Data Sharing for Multi-Task Offline Reinforcement Learning. CoRR abs/2109.08128 (2021)
- [i280] Aviral Kumar, Anikait Singh, Stephen Tian, Chelsea Finn, Sergey Levine: A Workflow for Offline Model-Free Robotic Reinforcement Learning. CoRR abs/2109.10813 (2021)
- [i279] Aurick Zhou, Sergey Levine: Training on Test Data with Bayesian Adaptation for Covariate Shift. CoRR abs/2109.12746 (2021)
- [i278] Frederik Ebert, Yanlai Yang, Karl Schmeckpeper, Bernadette Bucher, Georgios Georgakis, Kostas Daniilidis, Chelsea Finn, Sergey Levine: Bridge Data: Boosting Generalization of Robotic Skills with Cross-Domain Datasets. CoRR abs/2109.13396 (2021)
- [i277] Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine: The Information Geometry of Unsupervised Reinforcement Learning. CoRR abs/2110.02719 (2021)
- [i276] Benjamin Eysenbach, Alexander Khazatsky, Sergey Levine, Ruslan Salakhutdinov: Mismatched No More: Joint Model-Policy Optimization for Model-Based RL. CoRR abs/2110.02758 (2021)
- [i275] Tony Z. Zhao, Jianlan Luo, Oleg Sushkov, Rugile Pevceviciute, Nicolas Heess, Jonathan Scholz, Stefan Schaal, Sergey Levine: Offline Meta-Reinforcement Learning for Industrial Insertion. CoRR abs/2110.04276 (2021)
- [i274] Laura Smith, J. Chase Kew, Xue Bin Peng, Sehoon Ha, Jie Tan, Sergey Levine: Legged Robots that Keep on Learning: Fine-Tuning Locomotion Policies in the Real World. CoRR abs/2110.05457 (2021)
- [i273] Ilya Kostrikov, Ashvin Nair, Sergey Levine: Offline Reinforcement Learning with Implicit Q-Learning. CoRR abs/2110.06169 (2021)
- [i272]