CoRL 2018: Zürich, Switzerland
2nd Annual Conference on Robot Learning, CoRL 2018, Zürich, Switzerland, 29-31 October 2018, Proceedings. Proceedings of Machine Learning Research 87, PMLR 2018
- Matthias Müller, Alexey Dosovitskiy, Bernard Ghanem, Vladlen Koltun: Driving Policy Transfer via Modularity and Abstraction. 1-15
- Eshed Ohn-Bar, Kris Kitani, Chieko Asakawa: Personalized Dynamics Models for Adaptive Assistive Navigation Systems. 16-39
- Annie Xie, Avi Singh, Sergey Levine, Chelsea Finn: Few-Shot Goal Inference for Visuomotor Learning and Planning. 40-52
- Abhishek Das, Georgia Gkioxari, Stefan Lee, Devi Parikh, Dhruv Batra: Neural Modular Control for Embodied Question Answering. 53-62
- Jianwei Yang, Jiasen Lu, Stefan Lee, Dhruv Batra, Devi Parikh: Visual Curiosity: Learning to Ask Questions to Learn Visual Recognition. 63-80
- Haonan Yu, Xiaochen Lian, Haichao Zhang, Wei Xu: Guided Feature Transformation (GFT): A Neural Language Grounding Module for Embodied Agents. 81-98
- Eric Jang, Coline Devin, Vincent Vanhoucke, Sergey Levine: Grasp2Vec: Learning Object Representations from Self-Supervised Grasping. 99-112
- Rui Zhao, Volker Tresp: Energy-Based Hindsight Experience Prioritization. 113-122
- Dylan P. Losey, Marcia K. O'Malley: Including Uncertainty when Learning from Human Corrections. 123-132
- Elia Kaufmann, Antonio Loquercio, René Ranftl, Alexey Dosovitskiy, Vladlen Koltun, Davide Scaramuzza: Deep Drone Racing: Learning Agile Flight in Dynamic Environments. 133-145
- Bin Yang, Ming Liang, Raquel Urtasun: HDNET: Exploiting HD Maps for 3D Object Detection. 146-155
- Artemij Amiranashvili, Alexey Dosovitskiy, Vladlen Koltun, Thomas Brox: Motion Perception in Reinforcement Learning with Dynamic Objects. 156-168
- Péter Karkus, David Hsu, Wee Sun Lee: Particle Filter Networks with Application to Visual Localization. 169-178
- John D. Martin, Jinkun Wang, Brendan J. Englot: Sparse Gaussian Process Temporal Difference Learning for Marine Robot Navigation. 179-189
- Vitor Guizilini, Fabio Ramos: Fast 3D Modeling with Approximated Convolutional Kernels. 190-199
- Vitor Guizilini, Fabio Ramos: Unpaired Learning of Dense Visual Depth Estimators for Urban Environments. 200-212
- Gregory J. Stein, Christopher Bradley, Nicholas Roy: Learning over Subgoals for Efficient Navigation of Structured, Unknown Environments. 213-222
- Guru Subramani, Michael R. Zinn, Michael Gleicher: Inferring geometric constraints in human demonstrations. 223-236
- Axel Sauer, Nikolay Savinov, Andreas Geiger: Conditional Affordance Learning for Driving in Urban Environments. 237-252
- Patrick Wenzel, Qadeer Khan, Daniel Cremers, Laura Leal-Taixé: Modular Vehicle Control for Transferring Semantic Information Between Weather Conditions Using GANs. 253-269
- Jacky Liang, Viktor Makoviychuk, Ankur Handa, Nuttapong Chentanez, Miles Macklin, Dieter Fox: GPU-Accelerated Robotic Simulation for Distributed Reinforcement Learning. 270-282
- Arash K. Ushani, Ryan M. Eustice: Feature Learning for Scene Flow Estimation from LIDAR. 283-292
- Anirudha Majumdar, Maxwell Goldstein: PAC-Bayes Control: Synthesizing Controllers that Provably Generalize to Novel Environments. 293-305
- Jonathan Tremblay, Thang To, Balakumar Sundaralingam, Yu Xiang, Dieter Fox, Stan Birchfield: Deep Object Pose Estimation for Semantic Robotic Grasping of Household Objects. 306-316
- Connor Schenck, Dieter Fox: SPNets: Differentiable Fluid Dynamics for Deep Neural Networks. 317-335
- Maria Bauzá, Francois Robert Hogan, Alberto Rodriguez: A Data-Efficient Approach to Precise and Controlled Pushing. 336-345
- Jake Bruce, Niko Sünderhauf, Piotr Mirowski, Raia Hadsell, Michael Milford: Learning Deployable Navigation Policies at Kilometer Scale from a Single Traversal. 346-361
- Daniel S. Brown, Yuchen Cui, Scott Niekum: Risk-Aware Active Inverse Reinforcement Learning. 362-372
- Peter R. Florence, Lucas Manuelli, Russ Tedrake: Dense Object Nets: Learning Dense Visual Object Descriptors By and For Robotic Manipulation. 373-385
- Philippe Morere, Fabio Ramos: Bayesian RL for Goal-Only Rewards. 386-398
- Eugene Vinitsky, Aboudy Kreidieh, Luc Le Flem, Nishant Kheterpal, Kathy Jang, Cathy Wu, Fangyu Wu, Richard Liaw, Eric Liang, Alexandre M. Bayen: Benchmarks for reinforcement learning in mixed-autonomy traffic. 399-409
- Fan Wang, Bo Zhou, Ke Chen, Tingxiang Fan, Xi Zhang, Jiangyong Li, Hao Tian, Jia Pan: Intervention Aided Reinforcement Learning for Safe and Practical Policy Optimization in Navigation. 410-421
- Ricson Cheng, Arpit Agarwal, Katerina Fragkiadaki: Reinforcement Learning of Active Vision for Manipulating Objects under Occlusions. 422-431
- Clement Gehring, Leslie Pack Kaelbling, Tomás Lozano-Pérez: Adaptable replanning with compressed linear action models for learning from demonstrations. 432-442
- Ransalu Senanayake, Anthony Tompkins, Fabio Ramos: Automorphing Kernels for Nonstationarity in Mapping Unstructured Environments. 443-455
- Paul-Edouard Sarlin, Frédéric Debraine, Marcin Dymczyk, Roland Siegwart: Leveraging Deep Visual Descriptors for Hierarchical Efficient Localization. 456-465
- Spencer M. Richards, Felix Berkenkamp, Andreas Krause: The Lyapunov Neural Network: Adaptive Stability Certification for Safe Learning of Dynamical Systems. 466-476
- Marcus Gualtieri, Robert Platt Jr.: Learning 6-DoF Grasping and Pick-Place Using Attention Focus. 477-486
- Adrien Laversanne-Finot, Alexandre Péré, Pierre-Yves Oudeyer: Curiosity Driven Exploration of Learned Disentangled Goal Spaces. 487-504
- Valts Blukis, Dipendra Kumar Misra, Ross A. Knepper, Yoav Artzi: Mapping Navigation Instructions to Continuous Control Actions with Position-Visitation Prediction. 505-518
- Erdem Biyik, Dorsa Sadigh: Batch Active Preference-Based Learning of Reward Functions. 519-528
- Samuel Clarke, Travers Rhodes, Christopher G. Atkeson, Oliver Kroemer: Learning Audio Feedback for Estimating Amount and Flow of Granular Material. 529-550
- Yun Long, Xueyuan She, Saibal Mukhopadhyay: HybridNet: Integrating Model-based and Data-driven Learning to Predict Evolution of Dynamical Systems. 551-560
- A. Rupam Mahmood, Dmytro Korenkevych, Gautham Vasan, William Ma, James Bergstra: Benchmarking Reinforcement Learning Algorithms on Real-World Robots. 561-591
- Tanmay Shankar, Nicholas Rhinehart, Katharina Muelling, Kris M. Kitani: Learning Neural Parsers with Deterministic Differentiable Imitation Learning. 592-604
- Ioan Andrei Barsan, Shenlong Wang, Andrei Pokrovsky, Raquel Urtasun: Learning to Localize Using a LiDAR Intensity Map. 605-616
- Ignasi Clavera, Jonas Rothfuss, John Schulman, Yasuhiro Fujita, Tamim Asfour, Pieter Abbeel: Model-Based Reinforcement Learning via Meta-Policy Optimization. 617-629
- Guilherme Maeda, Okan Koc, Jun Morimoto: Reinforcement Learning of Phase Oscillators for Fast Adaptation to Moving Targets. 630-640
- Rika Antonova, Mia Kokic, Johannes A. Stork, Danica Kragic: Global Search with Bernoulli Alternation Kernel for Task-oriented Grasping Informed by Simulation. 641-650
- Dmitry Kalashnikov, Alex Irpan, Peter Pastor, Julian Ibarz, Alexander Herzog, Eric Jang, Deirdre Quillen, Ethan Holly, Mrinal Kalakrishnan, Vincent Vanhoucke, Sergey Levine: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation. 651-673
- Joshua Romoff, Peter Henderson, Alexandre Piché, Vincent François-Lavet, Joelle Pineau: Reward Estimation for Variance Reduction in Deep Reinforcement Learning. 674-699
- Fabio Muratore, Felix Treede, Michael Gienger, Jan Peters: Domain Randomization for Simulation-Based Policy Optimization with Transferability Assessment. 700-713
- Daniel Nyga, Subhro Roy, Rohan Paul, Daehyung Park, Mihai Pomarlan, Michael Beetz, Nicholas Roy: Grounding Robot Plans from Natural Language Instructions with Incomplete World Knowledge. 714-723
- Rohan Chitnis, Leslie Pack Kaelbling, Tomás Lozano-Pérez: Learning What Information to Give in Partially Observed Domains. 724-733
- Jan Matas, Stephen James, Andrew J. Davison: Sim-to-Real Reinforcement Learning for Deformable Object Manipulation. 734-743
- Visak C. V. Kumar, Sehoon Ha, C. Karen Liu: Expanding Motor Skills using Relay Networks. 744-756
- Ajinkya Jain, Scott Niekum: Efficient Hierarchical Robot Motion Planning Under Uncertainty and Hybrid Dynamics. 757-766
- Linxi Fan, Yuke Zhu, Jiren Zhu, Zihua Liu, Orien Zeng, Anchit Gupta, Joan Creus-Costa, Silvio Savarese, Li Fei-Fei: SURREAL: Open-Source Reinforcement Learning Framework and Robot Manipulation Benchmark. 767-782
- Stephen James, Michael Bloesch, Andrew J. Davison: Task-Embedded Control Networks for Few-Shot Imitation Learning. 783-795
- Andreea Bobu, Andrea Bajcsy, Jaime F. Fisac, Anca D. Dragan: Learning under Misspecified Objective Spaces. 796-805
- Gregory Kahn, Adam Villaflor, Pieter Abbeel, Sergey Levine: Composable Action-Conditioned Predictors: Flexible Off-Policy Learning for Robot Navigation. 806-816
- Florian Golemo, Adrien Ali Taïga, Aaron C. Courville, Pierre-Yves Oudeyer: Sim-to-Real Transfer with Neural-Augmented Robot Simulation. 817-828
- Tixiao Shan, Jinkun Wang, Brendan J. Englot, Kevin J. Doherty: Bayesian Generalized Kernel Inference for Terrain Traversability Mapping. 829-838
- Rituraj Kaushik, Konstantinos I. Chatzilygeroudis, Jean-Baptiste Mouret: Multi-objective Model-based Policy Search for Data-efficient Learning with Sparse Rewards. 839-855
- Ferran Alet, Tomás Lozano-Pérez, Leslie Pack Kaelbling: Modular meta-learning. 856-868
- Theodoros Stouraitis, Iordanis Chatzinikolaidis, Michael Gienger, Sethu Vijayakumar: Dyadic collaborative Manipulation through Hybrid Trajectory Optimization. 869-878
- Ajay Mandlekar, Yuke Zhu, Animesh Garg, Jonathan Booher, Max Spero, Albert Tung, Julian Gao, John Emmons, Anchit Gupta, Emre Orbay, Silvio Savarese, Li Fei-Fei: ROBOTURK: A Crowdsourcing Platform for Robotic Skill Learning through Imitation. 879-893
- Yanfu Zhang, Wenshan Wang, Rogerio Bonatti, Daniel Maturana, Sebastian A. Scherer: Integrating kinematics and environment context into deep inverse reinforcement learning for predicting off-road vehicle trajectories. 894-905
- Pratyusha Sharma, Lekha Mohan, Lerrel Pinto, Abhinav Gupta: Multiple Interactions Made Easy (MIME): Large Scale Demonstrations Data for Imitation. 906-915
- Atil Iscen, Ken Caluwaerts, Jie Tan, Tingnan Zhang, Erwin Coumans, Vikas Sindhwani, Vincent Vanhoucke: Policies Modulating Trajectory Generators. 916-926
- Nadia Figueroa, Aude Billard: A Physically-Consistent Bayesian Non-Parametric Mixture Model for Dynamical System Learning. 927-946
- Sergio Casas, Wenjie Luo, Raquel Urtasun: IntentNet: Learning to Predict Intention from Raw Sensor Data. 947-956
- Yordan Hristov, Alex Lascarides, Subramanian Ramamoorthy: Interpretable Latent Spaces for Learning from Demonstration. 957-968
- Henri Rebecq, Daniel Gehrig, Davide Scaramuzza: ESIM: an Open Event Camera Simulator. 969-982
- Frederik Ebert, Sudeep Dasari, Alex X. Lee, Sergey Levine, Chelsea Finn: Robustness via Retrying: Closed-Loop Robotic Manipulation with Self-Supervised Learning. 983-993