


Sergey Levine
2023

- [j21] Shagun Sodhani, Sergey Levine, Amy Zhang: Improving Generalization with Approximate Factored Value Functions. Trans. Mach. Learn. Res. 2023 (2023)
- [i395] Amrith Setlur, Don Kurian Dennis, Benjamin Eysenbach, Aditi Raghunathan, Chelsea Finn, Virginia Smith, Sergey Levine: Bitrate-Constrained DRO: Beyond Worst Case Robustness To Unknown Group Shifts. CoRR abs/2302.02931 (2023)
- [i394] Philip J. Ball, Laura M. Smith, Ilya Kostrikov, Sergey Levine: Efficient Online Reinforcement Learning with Offline Data. CoRR abs/2302.02948 (2023)
- [i393] Seohong Park, Sergey Levine: Predictable MDP Abstraction for Unsupervised Model-Based RL. CoRR abs/2302.03921 (2023)
- [i392] Annie S. Chen, Yoonho Lee, Amrith Setlur, Sergey Levine, Chelsea Finn: Project and Probe: Sample-Efficient Domain Adaptation by Interpolating Orthogonal Features. CoRR abs/2302.05441 (2023)
- [i391] Zhongyu Li, Xue Bin Peng, Pieter Abbeel, Sergey Levine, Glen Berseth, Koushil Sreenath: Robust and Versatile Bipedal Jumping Control through Multi-Task Reinforcement Learning. CoRR abs/2302.09450 (2023)
- [i390] Wenlong Huang, Fei Xia, Dhruv Shah, Danny Driess, Andy Zeng, Yao Lu, Pete Florence, Igor Mordatch, Sergey Levine, Karol Hausman, Brian Ichter: Grounded Decoding: Guiding Text Generation with Grounded Models for Robot Control. CoRR abs/2303.00855 (2023)
- [i389] Joey Hong, Anca D. Dragan, Sergey Levine: Learning to Influence Human Behavior with Offline Reinforcement Learning. CoRR abs/2303.02265 (2023)
- [i388] Danny Driess, Fei Xia, Mehdi S. M. Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, Wenlong Huang, Yevgen Chebotar, Pierre Sermanet, Daniel Duckworth, Sergey Levine, Vincent Vanhoucke, Karol Hausman, Marc Toussaint, Klaus Greff, Andy Zeng, Igor Mordatch, Pete Florence: PaLM-E: An Embodied Multimodal Language Model. CoRR abs/2303.03378 (2023)
- [i387] Mitsuhiko Nakamoto, Yuexiang Zhai, Anikait Singh, Max Sobol Mark, Yi Ma, Chelsea Finn, Aviral Kumar, Sergey Levine: Cal-QL: Calibrated Offline RL Pre-Training for Efficient Online Fine-Tuning. CoRR abs/2303.05479 (2023)
- [i386] Manan Tomar, Riashat Islam, Sergey Levine, Philip Bachman: Ignorance is Bliss: Robust Control via Information Gating. CoRR abs/2303.06121 (2023)
- [i385] Michael Chang, Alyssa L. Dayan, Franziska Meier, Thomas L. Griffiths, Sergey Levine, Amy Zhang: Neural Constraint Satisfaction: Hierarchical Abstraction for Combinatorial Generalization in Object Rearrangement. CoRR abs/2303.11373 (2023)
- [i384] Dibya Ghosh, Chethan Bhateja, Sergey Levine: Reinforcement Learning from Passive Data via Latent Intentions. CoRR abs/2304.04782 (2023)
- [i383] Kyle Stachowicz, Dhruv Shah, Arjun Bhorkar, Ilya Kostrikov, Sergey Levine: FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing. CoRR abs/2304.09831 (2023)
- [i382] Laura M. Smith, J. Chase Kew, Tianyu Li, Linda Luu, Xue Bin Peng, Sehoon Ha, Jie Tan, Sergey Levine: Learning and Adapting Agile Locomotion Skills by Transferring Experience. CoRR abs/2304.09834 (2023)
- [i381] Qiyang Li, Aviral Kumar, Ilya Kostrikov, Sergey Levine: Efficient Deep Reinforcement Learning Requires Regulating Overfitting. CoRR abs/2304.10466 (2023)
- [i380] Philippe Hansen-Estruch, Ilya Kostrikov, Michael Janner, Jakub Grudzien Kuba, Sergey Levine: IDQL: Implicit Q-Learning as an Actor-Critic Method with Diffusion Policies. CoRR abs/2304.10573 (2023)
- [i379] Tony Z. Zhao, Vikash Kumar, Sergey Levine, Chelsea Finn: Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware. CoRR abs/2304.13705 (2023)
- [i378] Alexander Herzog, Kanishka Rao, Karol Hausman, Yao Lu, Paul Wohlhart, Mengyuan Yan, Jessica Lin, Montserrat Gonzalez Arenas, Ted Xiao, Daniel Kappler, Daniel Ho, Jarek Rettinghouse, Yevgen Chebotar, Kuang-Huei Lee, Keerthana Gopalakrishnan, Ryan Julian, Adrian Li, Chuyuan Kelly Fu, Bob Wei, Sangeetha Ramesh, Khem Holden, Kim Kleiven, David Rendleman, Sean Kirmani, Jeff Bingham, Jonathan Weisz, Ying Xu, Wenlong Lu, Matthew Bennice, Cody Fong, David Do, Jessica Lam, Yunfei Bai, Benjie Holson, Michael Quinlan, Noah Brown, Mrinal Kalakrishnan, Julian Ibarz, Peter Pastor, Sergey Levine: Deep RL at Scale: Sorting Waste in Office Buildings with a Fleet of Mobile Manipulators. CoRR abs/2305.03270 (2023)
- [i377] Kevin Black, Michael Janner, Yilun Du, Ilya Kostrikov, Sergey Levine: Training Diffusion Models with Reinforcement Learning. CoRR abs/2305.13301 (2023)

2022
- [j20] Xue Bin Peng, Yunrong Guo, Lina Halper, Sergey Levine, Sanja Fidler: ASE: large-scale reusable adversarial skill embeddings for physically simulated characters. ACM Trans. Graph. 41(4): 94:1-94:17 (2022)
- [c326] Dhruv Shah, Arjun Bhorkar, Hrishit Leen, Ilya Kostrikov, Nicholas Rhinehart, Sergey Levine: Offline Reinforcement Learning for Visual Navigation. CoRL 2022: 44-54
- [c325] Kuan Fang, Patrick Yin, Ashvin Nair, Homer Walke, Gengchen Yan, Sergey Levine: Generalization with Lossy Affordances: Leveraging Broad Offline Data for Learning Visuomotor Tasks. CoRL 2022: 106-117
- [c324] Brian Ichter, Anthony Brohan, Yevgen Chebotar, Chelsea Finn, Karol Hausman, Alexander Herzog, Daniel Ho, Julian Ibarz, Alex Irpan, Eric Jang, Ryan Julian, Dmitry Kalashnikov, Sergey Levine, Yao Lu, Carolina Parada, Kanishka Rao, Pierre Sermanet, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Mengyuan Yan, Noah Brown, Michael Ahn, Omar Cortes, Nicolas Sievers, Clayton Tan, Sichun Xu, Diego Reyes, Jarek Rettinghouse, Jornell Quiambao, Peter Pastor, Linda Luu, Kuang-Huei Lee, Yuheng Kuang, Sally Jesmonth, Nikhil J. Joshi, Kyle Jeffrey, Rosario Jauregui Ruano, Jasmine Hsu, Keerthana Gopalakrishnan, Byron David, Andy Zeng, Chuyuan Kelly Fu: Do As I Can, Not As I Say: Grounding Language in Robotic Affordances. CoRL 2022: 287-318
- [c323] Dhruv Shah, Blazej Osinski, Brian Ichter, Sergey Levine: LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action. CoRL 2022: 492-504
- [c322] Charles Packer, Nicholas Rhinehart, Rowan Thomas McAllister, Matthew A. Wright, Xin Wang, Jeff He, Sergey Levine, Joseph E. Gonzalez: Is Anyone There? Learning a Planner Contingent on Perceptual Uncertainty. CoRL 2022: 1607-1617
- [c321] Homer Walke, Jonathan Yang, Albert Yu, Aviral Kumar, Jedrzej Orbik, Avi Singh, Sergey Levine: Don't Start From Scratch: Leveraging Prior Data to Automate Robotic Reinforcement Learning. CoRL 2022: 1652-1662
- [c320] Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, Pierre Sermanet, Tomas Jackson, Noah Brown, Linda Luu, Sergey Levine, Karol Hausman, Brian Ichter: Inner Monologue: Embodied Reasoning through Planning with Language Models. CoRL 2022: 1769-1782
- [c319] Gilbert Feng, Hongbo Zhang, Zhongyu Li, Xue Bin Peng, Bhuvan Basireddy, Linzhu Yue, Zhitao Song, Lizhi Yang, Yunhui Liu, Koushil Sreenath, Sergey Levine: GenLoco: Generalized Locomotion Controllers for Quadrupedal Robots. CoRL 2022: 1893-1903
- [c318] Glen Berseth, Zhiwei Zhang, Grace Zhang, Chelsea Finn, Sergey Levine: CoMPS: Continual Meta Policy Search. ICLR 2022
- [c317] Homanga Bharadhwaj, Mohammad Babaeizadeh, Dumitru Erhan, Sergey Levine: Information Prioritization through Empowerment in Visual Model-based RL. ICLR 2022
- [c316] Scott Emmons, Benjamin Eysenbach, Ilya Kostrikov, Sergey Levine: RvS: What is Essential for Offline RL via Supervised Learning? ICLR 2022
- [c315] Benjamin Eysenbach, Sergey Levine: Maximum Entropy RL (Provably) Solves Some Robust RL Problems. ICLR 2022
- [c314] Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine: The Information Geometry of Unsupervised Reinforcement Learning. ICLR 2022
- [c313] Ilya Kostrikov, Ashvin Nair, Sergey Levine: Offline Reinforcement Learning with Implicit Q-Learning. ICLR 2022
- [c312] Aviral Kumar, Rishabh Agarwal, Tengyu Ma, Aaron C. Courville, George Tucker, Sergey Levine: DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization. ICLR 2022
- [c311] Aviral Kumar, Joey Hong, Anikait Singh, Sergey Levine: Should I Run Offline Reinforcement Learning or Behavioral Cloning? ICLR 2022
- [c310] Aviral Kumar, Amir Yazdanbakhsh, Milad Hashemi, Kevin Swersky, Sergey Levine: Data-Driven Offline Optimization for Architecting Hardware Accelerators. ICLR 2022
- [c309] Shiori Sagawa, Pang Wei Koh, Tony Lee, Irena Gao, Sang Michael Xie, Kendrick Shen, Ananya Kumar, Weihua Hu, Michihiro Yasunaga, Henrik Marklund, Sara Beery, Etienne David, Ian Stavness, Wei Guo, Jure Leskovec, Kate Saenko, Tatsunori Hashimoto, Sergey Levine, Chelsea Finn, Percy Liang: Extending the WILDS Benchmark for Unsupervised Adaptation. ICLR 2022
- [c308] Dhruv Shah, Peng Xu, Yao Lu, Ted Xiao, Alexander Toshev, Sergey Levine, Brian Ichter: Value Function Spaces: Skill-Centric State Abstractions for Long-Horizon Reasoning. ICLR 2022
- [c307] Archit Sharma, Kelvin Xu, Nikhil Sardana, Abhishek Gupta, Karol Hausman, Sergey Levine, Chelsea Finn: Autonomous Reinforcement Learning: Formalism and Benchmarking. ICLR 2022
- [c306] Mengjiao Yang, Sergey Levine, Ofir Nachum: TRAIL: Near-Optimal Imitation Learning with Suboptimal Data. ICLR 2022
- [c305] Tianjun Zhang, Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine, Joseph E. Gonzalez: C-Planning: An Automatic Curriculum for Learning Goal-Reaching Tasks. ICLR 2022
- [c304] Dibya Ghosh, Anurag Ajay, Pulkit Agrawal, Sergey Levine: Offline RL Policies Should Be Trained to be Adaptive. ICML 2022: 7513-7530
- [c303] Philippe Hansen-Estruch, Amy Zhang, Ashvin Nair, Patrick Yin, Sergey Levine: Bisimulation Makes Analogies in Goal-Conditioned Reinforcement Learning. ICML 2022: 8407-8426
- [c302] Michael Janner, Yilun Du, Joshua B. Tenenbaum, Sergey Levine: Planning with Diffusion for Flexible Behavior Synthesis. ICML 2022: 9902-9915
- [c301] Katie Kang, Paula Gradu, Jason J. Choi, Michael Janner, Claire J. Tomlin, Sergey Levine: Lyapunov Density Models: Constraining Distribution Shift in Learning-Based Control. ICML 2022: 10708-10733
- [c300] Vitchyr H. Pong, Ashvin V. Nair, Laura M. Smith, Catherine Huang, Sergey Levine: Offline Meta-Reinforcement Learning with Online Self-Supervision. ICML 2022: 17811-17829
- [c299] Brandon Trabucco, Xinyang Geng, Aviral Kumar, Sergey Levine: Design-Bench: Benchmarks for Data-Driven Offline Model-Based Optimization. ICML 2022: 21658-21676
- [c298] Tianhe Yu, Aviral Kumar, Yevgen Chebotar, Karol Hausman, Chelsea Finn, Sergey Levine: How to Leverage Unlabeled Data in Offline Reinforcement Learning. ICML 2022: 25611-25635
- [c297] Rowan McAllister, Blake Wulfe, Jean Mercat, Logan Ellis, Sergey Levine, Adrien Gaidon: Control-Aware Prediction Objectives for Autonomous Driving. ICRA 2022: 1-8
- [c296] Laura M. Smith, J. Chase Kew, Xue Bin Peng, Sehoon Ha, Jie Tan, Sergey Levine: Legged Robots that Keep on Learning: Fine-Tuning Locomotion Policies in the Real World. ICRA 2022: 1593-1599
- [c295] Nitish Dashora, Daniel Shin, Dhruv Shah, Henry A. Leopold, David D. Fan, Ali-Akbar Agha-Mohammadi, Nicholas Rhinehart, Sergey Levine: Hybrid Imitative Planning with Geometric and Predictive Costs in Off-road Environments. ICRA 2022: 4452-4458
- [c294] Tony Z. Zhao, Jianlan Luo, Oleg Sushkov, Rugile Pevceviciute, Nicolas Heess, Jon Scholz, Stefan Schaal, Sergey Levine: Offline Meta-Reinforcement Learning for Industrial Insertion. ICRA 2022: 6386-6393
- [c293] Sean Chen, Jensen Gao, Siddharth Reddy, Glen Berseth, Anca D. Dragan, Sergey Levine: ASHA: Assistive Teleoperation via Human-in-the-Loop Reinforcement Learning. ICRA 2022: 7505-7512
- [c292] Yandong Ji, Zhongyu Li, Yinan Sun, Xue Bin Peng, Sergey Levine, Glen Berseth, Koushil Sreenath: Hierarchical Reinforcement Learning for Precise Soccer Shooting Skills using a Quadrupedal Robot. IROS 2022: 1479-1486
- [c291] Kuan Fang, Patrick Yin, Ashvin Nair, Sergey Levine: Planning to Practice: Efficient Online Fine-Tuning by Composing Goals in Latent Space. IROS 2022: 4076-4083
- [c290] Charlie Snell, Sherry Yang, Justin Fu, Yi Su, Sergey Levine: Context-Aware Language Modeling for Goal-Oriented Dialogue Systems. NAACL-HLT (Findings) 2022: 2351-2366
- [c289] Siddharth Verma, Justin Fu, Sherry Yang, Sergey Levine: CHAI: A CHatbot AI for Task-Oriented Dialogue with Offline Reinforcement Learning. NAACL-HLT 2022: 4471-4491
- [c288] Michael Chang, Tom Griffiths, Sergey Levine: Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation. NeurIPS 2022
- [c287] Abhishek Gupta, Aldo Pacchiano, Yuexiang Zhai, Sham M. Kakade, Sergey Levine: Unpacking Reward Shaping: Understanding the Benefits of Reward Engineering on Sample Complexity. NeurIPS 2022
- [c286] Anurag Ajay, Abhishek Gupta, Dibya Ghosh, Sergey Levine, Pulkit Agrawal: Distributionally Adaptive Meta Reinforcement Learning. NeurIPS 2022
- [c285] Annie S. Chen, Archit Sharma, Sergey Levine, Chelsea Finn: You Only Live Once: Single-Life Reinforcement Learning. NeurIPS 2022
- [c284] Benjamin Eysenbach, Alexander Khazatsky, Sergey Levine, Ruslan Salakhutdinov: Mismatched No More: Joint Model-Policy Optimization for Model-Based RL. NeurIPS 2022
- [c283] Benjamin Eysenbach, Soumith Udatha, Russ Salakhutdinov, Sergey Levine: Imitating Past Successes can be Very Suboptimal. NeurIPS 2022
- [c282] Benjamin Eysenbach, Tianjun Zhang, Sergey Levine, Ruslan Salakhutdinov: Contrastive Learning as Goal-Conditioned Reinforcement Learning. NeurIPS 2022
- [c281] Han Qi, Yi Su, Aviral Kumar, Sergey Levine: Data-Driven Offline Decision-Making via Invariant Representation Learning. NeurIPS 2022
- [c280] Siddharth Reddy, Sergey Levine, Anca D. Dragan: First Contact: Unsupervised Human-Machine Co-Adaptation via Mutual Information Maximization. NeurIPS 2022
- [c279] Amrith Setlur, Benjamin Eysenbach, Virginia Smith, Sergey Levine: Adversarial Unlearning: Reducing Confidence Along Adversarial Directions. NeurIPS 2022
- [c278] Quan Vuong, Aviral Kumar, Sergey Levine, Yevgen Chebotar: DASCO: Dual-Generator Adversarial Support Constrained Offline Reinforcement Learning. NeurIPS 2022
- [c277] Marvin Zhang, Sergey Levine, Chelsea Finn: MEMO: Test Time Robustness via Adaptation and Augmentation. NeurIPS 2022
- [i376] Jathushan Rajasegaran, Chelsea Finn, Sergey Levine: Fully Online Meta-Learning Without Task Boundaries. CoRR abs/2202.00263 (2022)
- [i375] Tianhe Yu, Aviral Kumar, Yevgen Chebotar, Karol Hausman, Chelsea Finn, Sergey Levine: How to Leverage Unlabeled Data in Offline Reinforcement Learning. CoRR abs/2202.01741 (2022)
- [i374] Eric Jang, Alex Irpan, Mohi Khansari, Daniel Kappler, Frederik Ebert, Corey Lynch, Sergey Levine, Chelsea Finn: BC-Z: Zero-Shot Task Generalization with Robotic Imitation Learning. CoRR abs/2202.02005 (2022)
- [i373] Sean Chen, Jensen Gao, Siddharth Reddy, Glen Berseth, Anca D. Dragan, Sergey Levine: ASHA: Assistive Teleoperation via Human-in-the-Loop Reinforcement Learning. CoRR abs/2202.02465 (2022)
- [i372] Brandon Trabucco, Xinyang Geng, Aviral Kumar, Sergey Levine: Design-Bench: Benchmarks for Data-Driven Offline Model-Based Optimization. CoRR abs/2202.08450 (2022)
- [i371] Dhruv Shah, Sergey Levine: ViKiNG: Vision-Based Kilometer-Scale Navigation with Geographic Hints. CoRR abs/2202.11271 (2022)
- [i370] Jensen Gao, Siddharth Reddy, Glen Berseth, Nicholas Hardy, Nikhilesh Natraj, Karunesh Ganguly, Anca D. Dragan, Sergey Levine: X2T: Training an X-to-Text Typing Interface with Online Learning from User Feedback. CoRR abs/2203.02072 (2022)
- [i369] Abhishek Gupta, Corey Lynch, Brandon Kinman, Garrett Peake, Sergey Levine, Karol Hausman: Demonstration-Bootstrapped Autonomous Practicing via Multi-Task Reinforcement Learning. CoRR abs/2203.15755 (2022)
- [i368] Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alexander Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil J. Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan: Do As I Can, Not As I Say: Grounding Language in Robotic Affordances. CoRR abs/2204.01691 (2022)
- [i367] Ikechukwu Uchendu, Ted Xiao, Yao Lu, Banghua Zhu, Mengyuan Yan, Joséphine Simon, Matthew Bennice, Chuyuan Fu, Cong Ma, Jiantao Jiao, Sergey Levine, Karol Hausman: Jump-Start Reinforcement Learning. CoRR abs/2204.02372 (2022)
- [i366] Aviral Kumar, Joey Hong, Anikait Singh, Sergey Levine: When Should We Prefer Offline Reinforcement Learning Over Behavioral Cloning? CoRR abs/2204.05618 (2022)
- [i365] Siddharth Verma, Justin Fu, Mengjiao Yang, Sergey Levine: CHAI: A CHatbot AI for Task-Oriented Dialogue with Offline Reinforcement Learning. CoRR abs/2204.08426 (2022)
- [i364] Homanga Bharadhwaj, Mohammad Babaeizadeh, Dumitru Erhan, Sergey Levine: INFOrmation Prioritization through EmPOWERment in Visual Model-Based RL. CoRR abs/2204.08585 (2022)
- [i363] Charlie Snell, Mengjiao Yang, Justin Fu, Yi Su, Sergey Levine: Context-Aware Language Modeling for Goal-Oriented Dialogue Systems. CoRR abs/2204.10198 (2022)
- [i362] Philippe Hansen-Estruch, Amy Zhang, Ashvin Nair, Patrick Yin, Sergey Levine: Bisimulation Makes Analogies in Goal-Conditioned Reinforcement Learning. CoRR abs/2204.13060 (2022)
- [i361] Rowan McAllister, Blake Wulfe, Jean Mercat, Logan Ellis, Sergey Levine, Adrien Gaidon: Control-Aware Prediction Objectives for Autonomous Driving. CoRR abs/2204.13319 (2022)
- [i360] Xue Bin Peng, Yunrong Guo, Lina Halper, Sergey Levine, Sanja Fidler: ASE: Large-Scale Reusable Adversarial Skill Embeddings for Physically Simulated Characters. CoRR abs/2205.01906 (2022)
- [i359] Kuan Fang, Patrick Yin, Ashvin Nair, Sergey Levine: Planning to Practice: Efficient Online Fine-Tuning by Composing Goals in Latent Space. CoRR abs/2205.08129 (2022)
- [i358] Michael Janner, Yilun Du, Joshua B. Tenenbaum, Sergey Levine: Planning with Diffusion for Flexible Behavior Synthesis. CoRR abs/2205.09991 (2022)
- [i357] Siddharth Reddy, Sergey Levine, Anca D. Dragan: First Contact: Unsupervised Human-Machine Co-Adaptation via Mutual Information Maximization. CoRR abs/2205.12381 (2022)
- [i356] Xinyang Geng, Hao Liu, Lisa Lee, Dale Schuurmans, Sergey Levine, Pieter Abbeel: Multimodal Masked Autoencoders Learn Transferable Representations. CoRR abs/2205.14204 (2022)
- [i355] Amrith Setlur, Benjamin Eysenbach, Virginia Smith, Sergey Levine: Adversarial Unlearning: Reducing Confidence Along Adversarial Directions. CoRR abs/2206.01367 (2022)
- [i354] Benjamin Eysenbach, Soumith Udatha, Sergey Levine, Ruslan Salakhutdinov: Imitating Past Successes can be Very Suboptimal. CoRR abs/2206.03378 (2022)
- [i353] Benjamin Eysenbach, Tianjun Zhang, Ruslan Salakhutdinov, Sergey Levine: Contrastive Learning as Goal-Conditioned Reinforcement Learning. CoRR abs/2206.07568 (2022)
- [i352] Katie Kang, Paula Gradu, Jason J. Choi, Michael Janner, Claire J. Tomlin, Sergey Levine: Lyapunov Density Models: Constraining Distribution Shift in Learning-Based Control. CoRR abs/2206.10524 (2022)
- [i351] Charlie Snell, Ilya Kostrikov, Yi Su, Mengjiao Yang, Sergey Levine: Offline RL for Natural Language Generation with Implicit Language Q Learning. CoRR abs/2206.11871 (2022)
- [i350] Michael Chang, Thomas L. Griffiths, Sergey Levine: Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation. CoRR abs/2207.00787 (2022)
- [i349] Dibya Ghosh, Anurag Ajay, Pulkit Agrawal, Sergey Levine: Offline RL Policies Should be Trained to be Adaptive. CoRR abs/2207.02200 (2022)
- [i348] Dhruv Shah, Blazej Osinski, Brian Ichter, Sergey Levine: LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action. CoRR abs/2207.04429 (2022)
- [i347] Homer Walke, Jonathan Yang, Albert Yu, Aviral Kumar, Jedrzej Orbik, Avi Singh, Sergey Levine: Don't Start From Scratch: Leveraging Prior Data to Automate Robotic Reinforcement Learning. CoRR abs/2207.04703 (2022)
- [i346] Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, Pierre Sermanet, Noah Brown, Tomas Jackson, Linda Luu, Sergey Levine, Karol Hausman, Brian Ichter: Inner Monologue: Embodied Reasoning through Planning with Language Models. CoRR abs/2207.05608 (2022)
- [i345] Yandong Ji, Zhongyu Li, Yinan Sun, Xue Bin Peng, Sergey Levine, Glen Berseth, Koushil Sreenath: Hierarchical Reinforcement Learning for Precise Soccer Shooting Skills using a Quadrupedal Robot. CoRR abs/2208.01160 (2022)
- [i344] Marwa Abdulhai, Natasha Jaques, Sergey Levine: Basis for Intentions: Efficient Inverse Reinforcement Learning using Past Experience. CoRR abs/2208.04919 (2022)
- [i343] Laura M. Smith, Ilya Kostrikov, Sergey Levine: A Walk in the Park: Learning to Walk in 20 Minutes With Model-Free Reinforcement Learning. CoRR abs/2208.07860 (2022)
- [i342] Gilbert Feng, Hongbo Zhang, Zhongyu Li, Xue Bin Peng, Bhuvan Basireddy, Linzhu Yue, Zhitao Song, Lizhi Yang, Yunhui Liu, Koushil Sreenath, Sergey Levine: GenLoco: Generalized Locomotion Controllers for Quadrupedal Robots. CoRR abs/2209.05309 (2022)
- [i341] Raj Ghugare, Homanga Bharadhwaj, Benjamin Eysenbach, Sergey Levine, Ruslan Salakhutdinov: Simplifying Model-based RL: Learning Representations, Latent-space Models, and Policies with One Objective. CoRR abs/2209.08466 (2022)
- [i340] Anurag Ajay, Abhishek Gupta, Dibya Ghosh, Sergey Levine, Pulkit Agrawal: Distributionally Adaptive Meta Reinforcement Learning. CoRR abs/2210.03104 (2022)
- [i339] Dhruv Shah, Ajay Sridhar, Arjun Bhorkar, Noriaki Hirose, Sergey Levine: GNM: A General Navigation Model to Drive Any Robot. CoRR abs/2210.03370 (2022)
- [i338] Aviral Kumar, Anikait Singh, Frederik Ebert, Yanlai Yang, Chelsea Finn, Sergey Levine: Pre-Training for Robots: Offline RL Enables Learning New Tasks from a Handful of Trials. CoRR abs/2210.05178 (2022)
- [i337] Kuan Fang, Patrick Yin, Ashvin Nair, Homer Walke, Gengchen Yan, Sergey Levine: Generalization with Lossy Affordances: Leveraging Broad Offline Data for Learning Visuomotor Tasks. CoRR abs/2210.06601 (2022)
- [i336] Noriaki Hirose, Dhruv Shah, Ajay Sridhar, Sergey Levine: ExAug: Robot-Conditioned Navigation Policies via Geometric Experience Augmentation. CoRR abs/2210.07450 (2022)
- [i335] Annie S. Chen, Archit Sharma, Sergey Levine, Chelsea Finn: You Only Live Once: Single-Life Reinforcement Learning. CoRR abs/2210.08863 (2022)
- [i334] Abhishek Gupta, Aldo Pacchiano, Yuexiang Zhai, Sham M. Kakade, Sergey Levine: Unpacking Reward Shaping: Understanding the Benefits of Reward Engineering on Sample Complexity. CoRR abs/2210.09579 (2022)
- [i333] Hao Liu, Xinyang Geng, Lisa Lee, Igor Mordatch, Sergey Levine, Sharan Narang, Pieter Abbeel: FCM: Forgetful Causal Masking Makes Causal Language Models Better Zero-Shot Learners. CoRR abs/2210.13432 (2022)
- [i332] Ashvin V. Nair, Brian Zhu, Gokul Narayanan, Eugen Solowjow, Sergey Levine: Learning on the Job: Self-Rewarding Offline-to-Online Finetuning for Industrial Insertion of Novel Connectors from Vision. CoRR abs/2210.15206 (2022)
- [i331] Tony Tong Wang, Adam Gleave, Nora Belrose, Tom Tseng, Joseph Miller, Michael D. Dennis, Yawen Duan, Viktor Pogrebniak, Sergey Levine, Stuart Russell: Adversarial Policies Beat Professional-Level Go AIs. CoRR abs/2211.00241 (2022)
- [i330]