


Tetsuya Ogata
Person information
- affiliation: Waseda University, Department of Intermedia Art and Science, Tokyo, Japan
- affiliation (2003 - 2012): Kyoto University, Graduate School of Informatics, Japan
- affiliation (2001 - 2003): RIKEN Brain Science Institute, Wako, Japan
- affiliation (PhD 2000): Waseda University, Tokyo, Japan
2020 – today
- 2024
- [j96] Kento Kawaharazuka, Tatsuya Matsushima, Shuhei Kurita, Chris Paxton, Andy Zeng, Tetsuya Ogata, Tadahiro Taniguchi: Special issue on real-world robot applications of the foundation models. Adv. Robotics 38(18): 1231 (2024)
- [j95] Shardul Kulkarni, Satoshi Funabashi, Alexander Schmitz, Tetsuya Ogata, Shigeki Sugano: Tactile Object Property Recognition Using Geometrical Graph Edge Features and Multi-Thread Graph Convolutional Network. IEEE Robotics Autom. Lett. 9(4): 3894-3901 (2024)
- [j94] Gangadhara Naga Sai Gubbala, Masato Nagashima, Hiroki Mori, Young Ah Seong, Hiroki Sato, Ryuma Niiyama, Yuki Suga, Tetsuya Ogata: Augmenting Compliance With Motion Generation Through Imitation Learning Using Drop-Stitch Reinforced Inflatable Robot Arm With Rigid Joints. IEEE Robotics Autom. Lett. 9(10): 8595-8602 (2024)
- [j93] Xianbo Cai, Hiroshi Ito, Hyogo Hiruma, Tetsuya Ogata: 3D Space Perception via Disparity Learning Using Stereo Images and an Attention Mechanism: Real-Time Grasping Motion Generation for Transparent Objects. IEEE Robotics Autom. Lett. 9(12): 11857-11864 (2024)
- [j92] Satoshi Funabashi, Gang Yan, Fei Hongyi, Alexander Schmitz, Lorenzo Jamone, Tetsuya Ogata, Shigeki Sugano: Tactile Transfer Learning and Object Recognition With a Multifingered Hand Using Morphology Specific Convolutional Neural Networks. IEEE Trans. Neural Networks Learn. Syst. 35(6): 7587-7601 (2024)
- [c246] Hideyuki Ichiwara, Hiroshi Ito, Kenjiro Yamamoto, Tetsuya Ogata: Retry-behavior Emergence for Robot-Motion Learning Without Teaching and Subtask Design. AIM 2024: 178-183
- [c245] Abdullah Mustafa, Ryo Hanai, Ixchel Georgina Ramirez-Alpizar, Floris Erich, Ryoichi Nakajo, Yukiyasu Domae, Tetsuya Ogata: Visual Imitation Learning of Non-Prehensile Manipulation Tasks with Dynamics-Supervised Models. CASE 2024: 3872-3879
- [c244] Naoki Shirakura, Natsuki Yamanobe, Tsubasa Maruyama, Yukiyasu Domae, Tetsuya Ogata: Work Tempo Instruction Framework for Balancing Human Workload and Productivity in Repetitive Task. HRI (Companion) 2024: 980-984
- [c243] Genki Shikada, Simon Armleder, Hiroshi Ito, Gordon Cheng, Tetsuya Ogata: Real-time Coordinated Motion Generation: A Hierarchical Deep Predictive Learning Model for Bimanual Tasks. IROS 2024: 496-503
- [c242] Takahisa Ueno, Satoshi Funabashi, Hiroshi Ito, Alexander Schmitz, Shardul Kulkarni, Tetsuya Ogata, Shigeki Sugano: Multi-Fingered Dragging of Unknown Objects and Orientations Using Distributed Tactile Information Through Vision-Transformer and LSTM. IROS 2024: 7445-7452
- [c241] Kanata Suzuki, Tetsuya Ogata: Sensorimotor Attention and Language-based Regressions in Shared Latent Variables for Integrating Robot Motion Learning and LLM. IROS 2024: 11872-11878
- [c240] Kazuki Hori, Kanata Suzuki, Tetsuya Ogata: Interactively Robot Action Planning with Uncertainty Analysis and Active Questioning by Large Language Model. SII 2024: 85-91
- [c239] Kenjiro Yamamoto, Hiroshi Ito, Hideyuki Ichiwara, Hiroki Mori, Tetsuya Ogata: Real-Time Motion Generation and Data Augmentation for Grasping Moving Objects with Dynamic Speed and Position Changes. SII 2024: 390-397
- [c238] Hiroto Iino, Kei Kase, Ryoichi Nakajo, Naoya Chiba, Hiroki Mori, Tetsuya Ogata: Generating Long-Horizon Task Actions by Leveraging Predictions of Environmental States. SII 2024: 478-483
- [c237] Suzuka Harada, Ryoichi Nakajo, Kei Kase, Tetsuya Ogata: Automatic Segmentation of Continuous Time-Series Data Based on Prediction Error Using Deep Predictive Learning. SII 2024: 928-933
- [i41] André Yuji Yasutomi, Hiroki Mori, Tetsuya Ogata: A Peg-in-hole Task Strategy for Holes in Concrete. CoRR abs/2403.19946 (2024)
- [i40] Kanata Suzuki, Tetsuya Ogata: Sensorimotor Attention and Language-based Regressions in Shared Latent Variables for Integrating Robot Motion Learning and LLM. CoRR abs/2407.09044 (2024)
- [i39] Tamon Miyake, Namiko Saito, Tetsuya Ogata, Yushi Wang, Shigeki Sugano: Dual-arm Motion Generation for Repositioning Care based on Deep Predictive Learning with Somatosensory Attention Mechanism. CoRR abs/2407.13376 (2024)
- [i38] Masaki Yoshikawa, Hiroshi Ito, Tetsuya Ogata: Achieving Faster and More Accurate Operation of Deep Predictive Learning. CoRR abs/2408.10231 (2024)
- [i37] Abdullah Mustafa, Ryo Hanai, Ixchel G. Ramirez, Floris Erich, Ryoichi Nakajo, Yukiyasu Domae, Tetsuya Ogata: Visual Imitation Learning of Non-Prehensile Manipulation Tasks with Dynamics-Supervised Models. CoRR abs/2410.19379 (2024)
- 2023
- [j91] Tomoki Ando, Hiroto Iino, Hiroki Mori, Ryota Torishima, Kuniyuki Takahashi, Shoichiro Yamaguchi, Daisuke Okanohara, Tetsuya Ogata: Learning-based collision-free planning on arbitrary optimization criteria in the latent space through cGANs. Adv. Robotics 37(10): 621-633 (2023)
- [j90] André Yuji Yasutomi, Hideyuki Ichiwara, Hiroshi Ito, Hiroki Mori, Tetsuya Ogata: Visual Spatial Attention and Proprioceptive Data-Driven Reinforcement Learning for Robust Peg-in-Hole Task Under Variable Conditions. IEEE Robotics Autom. Lett. 8(3): 1834-1841 (2023)
- [j89] Takumi Hara, Takashi Sato, Tetsuya Ogata, Hiromitsu Awano: Uncertainty-Aware Haptic Shared Control With Humanoid Robots for Flexible Object Manipulation. IEEE Robotics Autom. Lett. 8(10): 6435-6442 (2023)
- [j88] Hideyuki Ichiwara, Hiroshi Ito, Kenjiro Yamamoto, Hiroki Mori, Tetsuya Ogata: Modality Attention for Prediction-Based Robot Motion Generation: Improving Interpretability and Robustness of Using Multi-Modality. IEEE Robotics Autom. Lett. 8(12): 8271-8278 (2023)
- [c236] André Yuji Yasutomi, Tetsuya Ogata: Automatic Action Space Curriculum Learning with Dynamic Per-Step Masking. CASE 2023: 1-7
- [c235] Kanata Suzuki, Yuya Kamiwano, Naoya Chiba, Hiroki Mori, Tetsuya Ogata: Multi-Timestep-Ahead Prediction with Mixture of Experts for Embodied Question Answering. ICANN (6) 2023: 243-255
- [c234] Ryutaro Suzuki, Hayato Idei, Yuichi Yamashita, Tetsuya Ogata: Hierarchical Variational Recurrent Neural Network Modeling of Sensory Attenuation with Temporal Delay in Action-Outcome. ICDL 2023: 244-249
- [c233] Hideyuki Ichiwara, Hiroshi Ito, Kenjiro Yamamoto, Hiroki Mori, Tetsuya Ogata: Multimodal Time Series Learning of Robots Based on Distributed and Integrated Modalities: Verification with a Simulator and Actual Robots. ICRA 2023: 9551-9557
- [c232] Namiko Saito, João Moura, Tetsuya Ogata, Marina Y. Aoyama, Shingo Murata, Shigeki Sugano, Sethu Vijayakumar: Structured Motion Generation with Predictive Learning: Proposing Subgoal for Long-Horizon Manipulation. ICRA 2023: 9566-9572
- [c231] Ryo Hanai, Yukiyasu Domae, Ixchel Georgina Ramirez-Alpizar, Bruno Leme, Tetsuya Ogata: Force Map: Learning to Predict Contact Force Distribution from Vision. IROS 2023: 3129-3136
- [i36] Ryo Hanai, Yukiyasu Domae, Ixchel Georgina Ramirez-Alpizar, Bruno Leme, Tetsuya Ogata: Force Map: Learning to Predict Contact Force Distribution from Vision. CoRR abs/2304.05803 (2023)
- [i35] Kanata Suzuki, Hiroshi Ito, Tatsuro Yamada, Kei Kase, Tetsuya Ogata: Deep Predictive Learning: Motion Learning Concept inspired by Cognitive Robotics. CoRR abs/2306.14714 (2023)
- [i34] Kazuki Hori, Kanata Suzuki, Tetsuya Ogata: Interactively Robot Action Planning with Uncertainty Analysis and Active Questioning by Large Language Model. CoRR abs/2308.15684 (2023)
- [i33] Kenjiro Yamamoto, Hiroshi Ito, Hideyuki Ichiwara, Hiroki Mori, Tetsuya Ogata: Real-time Motion Generation and Data Augmentation for Grasping Moving Objects with Dynamic Speed and Position Changes. CoRR abs/2309.12547 (2023)
- [i32] Namiko Saito, Mayu Hiramoto, Ayuna Kubo, Kanata Suzuki, Hiroshi Ito, Shigeki Sugano, Tetsuya Ogata: Realtime Motion Generation with Active Perception Using Attention Mechanism for Cooking Robot. CoRR abs/2309.14837 (2023)
- [i31] André Yuji Yasutomi, Hideyuki Ichiwara, Hiroshi Ito, Hiroki Mori, Tetsuya Ogata: Visual Spatial Attention and Proprioceptive Data-Driven Reinforcement Learning for Robust Peg-in-Hole Task Under Variable Conditions. CoRR abs/2312.16438 (2023)
- 2022
- [j87] Tadahiro Taniguchi, Takayuki Nagai, Shingo Shimoda, Angelo Cangelosi, Yiannis Demiris, Yutaka Matsuo, Kenji Doya, Tetsuya Ogata, Lorenzo Jamone, Yukie Nagai, Emre Ugur, Daichi Mochihashi, Yuuya Unno, Kazuo Okanoya, Takashi Hashimoto: Special issue on Symbol Emergence in Robotics and Cognitive Systems (I). Adv. Robotics 36(1-2): 1-2 (2022)
- [j86] Tadahiro Taniguchi, Takayuki Nagai, Shingo Shimoda, Angelo Cangelosi, Yiannis Demiris, Yutaka Matsuo, Kenji Doya, Tetsuya Ogata, Lorenzo Jamone, Yukie Nagai, Emre Ugur, Daichi Mochihashi, Yuuya Unno, Kazuo Okanoya, Takashi Hashimoto: Special issue on symbol emergence in robotics and cognitive systems (II). Adv. Robotics 36(5-6): 217-218 (2022)
- [j85] Namiko Saito, Takumi Shimizu, Tetsuya Ogata, Shigeki Sugano: Utilization of Image/Force/Tactile Sensor Data for Object-Shape-Oriented Manipulation: Wiping Objects With Turning Back Motions and Occlusion. IEEE Robotics Autom. Lett. 7(2): 968-975 (2022)
- [j84] Satoshi Funabashi, Tomoki Isobe, Fei Hongyi, Atsumu Hiramoto, Alexander Schmitz, Shigeki Sugano, Tetsuya Ogata: Multi-Fingered In-Hand Manipulation With Various Object Properties Using Graph Convolutional Networks and Distributed Tactile Sensors. IEEE Robotics Autom. Lett. 7(2): 2102-2109 (2022)
- [j83] Kei Kase, Ai Tateishi, Tetsuya Ogata: Robot Task Learning With Motor Babbling Using Pseudo Rehearsal. IEEE Robotics Autom. Lett. 7(3): 8377-8382 (2022)
- [j82] Hyogo Hiruma, Hiroshi Ito, Hiroki Mori, Tetsuya Ogata: Deep Active Visual Attention for Real-Time Robot Motion Generation: Emergence of Tool-Body Assimilation and Adaptive Tool-Use. IEEE Robotics Autom. Lett. 7(3): 8550-8557 (2022)
- [j81] Minori Toyoda, Kanata Suzuki, Yoshihiko Hayashi, Tetsuya Ogata: Learning Bidirectional Translation Between Descriptions and Actions With Small Paired Data. IEEE Robotics Autom. Lett. 7(4): 10930-10937 (2022)
- [j80] Hiroshi Ito, Kenjiro Yamamoto, Hiroki Mori, Tetsuya Ogata: Efficient multitask learning with an embodied predictive model for door opening and entry with whole-body control. Sci. Robotics 7(65) (2022)
- [c230] Naoki Shirakura, Ryuichi Takase, Natsuki Yamanobe, Yukiyasu Domae, Tetsuya Ogata: Time Pressure Based Human Workload and Productivity Compatible System for Human-Robot Collaboration. CASE 2022: 659-666
- [c229] Ryosuke Yamada, Hirokatsu Kataoka, Naoya Chiba, Yukiyasu Domae, Tetsuya Ogata: Point Cloud Pre-training with Natural 3D Structures. CVPR 2022: 21251-21261
- [c228] Hideyuki Ichiwara, Hiroshi Ito, Kenjiro Yamamoto, Hiroki Mori, Tetsuya Ogata: Contact-Rich Manipulation of a Flexible Object based on Deep Predictive Learning using Vision and Tactility. ICRA 2022: 5375-5381
- [c227] Hiroshi Ito, Hideyuki Ichiwara, Kenjiro Yamamoto, Hiroki Mori, Tetsuya Ogata: Integrated Learning of Robot Motion and Sentences: Real-Time Prediction of Grasping Motion and Attention based on Language Instructions. ICRA 2022: 5404-5410
- [c226] Hyogo Hiruma, Hiroki Mori, Hiroshi Ito, Tetsuya Ogata: Guided Visual Attention Model Based on Interactions Between Top-down and Bottom-up Prediction for Robot Pose Prediction. IECON 2022: 1-6
- [c225] Kei Kase, Chikara Utsumi, Yukiyasu Domae, Tetsuya Ogata: Use of Action Label in Deep Predictive Learning for Robot Manipulation. IROS 2022: 13459-13465
- [c224] Hiroshi Ito, Takumi Kurata, Tetsuya Ogata: Sensory-Motor Learning for Simultaneous Control of Motion and Force: Generating Rubbing Motion against Uneven Object. SII 2022: 408-415
- [c223] Pin-Chu Yang, Satoshi Funabashi, Mohammed Al-Sada, Tetsuya Ogata: Generating Humanoid Robot Motions based on a Procedural Animation IK Rig Method. SII 2022: 491-498
- [c222] Wakana Fujii, Kanata Suzuki, Tomoki Ando, Ai Tateishi, Hiroki Mori, Tetsuya Ogata: Buttoning Task with a Dual-Arm Robot: An Exploratory Study on a Marker-based Algorithmic Method and Marker-less Machine Learning Methods. SII 2022: 682-689
- [c221] André Yuji Yasutomi, Hiroki Mori, Tetsuya Ogata: Curriculum-based Offline Network Training for Improvement of Peg-in-hole Task Performance for Holes in Concrete. SII 2022: 712-717
- [i30] Tomoki Ando, Hiroki Mori, Ryota Torishima, Kuniyuki Takahashi, Shoichiro Yamaguchi, Daisuke Okanohara, Tetsuya Ogata: Collision-free Path Planning in the Latent Space through cGANs. CoRR abs/2202.07203 (2022)
- [i29] Hyogo Hiruma, Hiroki Mori, Tetsuya Ogata: Guided Visual Attention Model Based on Interactions Between Top-down and Bottom-up Information for Robot Pose Prediction. CoRR abs/2202.10036 (2022)
- [i28] Tomoki Ando, Hiroto Iino, Hiroki Mori, Ryota Torishima, Kuniyuki Takahashi, Shoichiro Yamaguchi, Daisuke Okanohara, Tetsuya Ogata: Collision-free Path Planning on Arbitrary Optimization Criteria in the Latent Space through cGANs. CoRR abs/2202.13062 (2022)
- [i27] Minori Toyoda, Kanata Suzuki, Yoshihiko Hayashi, Tetsuya Ogata: Learning Bidirectional Translation between Descriptions and Actions with Small Paired Data. CoRR abs/2203.04218 (2022)
- [i26] Satoshi Funabashi, Tomoki Isobe, Fei Hongyi, Atsumu Hiramoto, Alexander Schmitz, Shigeki Sugano, Tetsuya Ogata: Multi-Fingered In-Hand Manipulation with Various Object Properties Using Graph Convolutional Networks and Distributed Tactile Sensors. CoRR abs/2205.04169 (2022)
- [i25] Hyogo Hiruma, Hiroshi Ito, Hiroki Mori, Tetsuya Ogata: Deep Active Visual Attention for Real-time Robot Motion Generation: Emergence of Tool-body Assimilation and Adaptive Tool-use. CoRR abs/2206.14530 (2022)
- 2021
- [j79] Namiko Saito, Tetsuya Ogata, Hiroki Mori, Shingo Murata, Shigeki Sugano: Tool-Use Model to Reproduce the Goal Situations Considering Relationship Among Tools, Objects, Actions and Effects Using Multimodal Deep Neural Networks. Frontiers Robotics AI 8: 748716 (2021)
- [j78] Kei Kase, Noboru Matsumoto, Tetsuya Ogata: Leveraging Motor Babbling for Efficient Robot Learning. J. Robotics Mechatronics 33(5): 1063-1074 (2021)
- [j77] Hayato Idei, Shingo Murata, Yuichi Yamashita, Tetsuya Ogata: Paradoxical sensory reactivity induced by functional disconnection in a robot model of neurodevelopmental disorder. Neural Networks 138: 150-163 (2021)
- [j76] Namiko Saito, Tetsuya Ogata, Satoshi Funabashi, Hiroki Mori, Shigeki Sugano: How to Select and Use Tools?: Active Perception of Target Objects Using Multimodal Deep Learning. IEEE Robotics Autom. Lett. 6(2): 2517-2524 (2021)
- [j75] Kanata Suzuki, Hiroki Mori, Tetsuya Ogata: Compensation for Undefined Behaviors During Robot Task Execution by Switching Controllers Depending on Embedded Dynamics in RNN. IEEE Robotics Autom. Lett. 6(2): 3475-3482 (2021)
- [j74] Minori Toyoda, Kanata Suzuki, Hiroki Mori, Yoshihiko Hayashi, Tetsuya Ogata: Embodying Pre-Trained Word Embeddings Through Robot Actions. IEEE Robotics Autom. Lett. 6(2): 4225-4232 (2021)
- [j73] Momomi Kanamura, Kanata Suzuki, Yuki Suga, Tetsuya Ogata: Development of a Basic Educational Kit for Robotic System with Deep Neural Networks. Sensors 21(11): 3804 (2021)
- [c220] Mohammed Al-Sada, Pin-Chu Yang, Chang-Chieh Chiu, Tito Pradhono Tomo, MHD Yamen Saraiji, Tetsuya Ogata, Tatsuo Nakajima: From Anime To Reality: Embodying An Anime Character As A Humanoid Robot. CHI Extended Abstracts 2021: 176:1-176:5
- [c219] André Yuji Yasutomi, Hiroki Mori, Tetsuya Ogata: A Peg-in-hole Task Strategy for Holes in Concrete. ICRA 2021: 2205-2211
- [c218] Ryoichi Nakajo, Tetsuya Ogata: Comparison of Consolidation Methods for Predictive Learning of Time Series. IEA/AIE (1) 2021: 113-120
- [c217] Satoshi Ohara, Tetsuya Ogata, Hiromitsu Awano: Binary Neural Network in Robotic Manipulation: Flexible Object Manipulation for Humanoid Robot Using Partially Binarized Auto-Encoder on FPGA. IROS 2021: 6010-6015
- [c216] Kanata Suzuki, Momomi Kanamura, Yuki Suga, Hiroki Mori, Tetsuya Ogata: In-air Knotting of Rope using Dual-Arm Robot based on Deep Learning. IROS 2021: 6724-6731
- [i24] Kanata Suzuki, Tetsuya Ogata: Stable deep reinforcement learning method by predicting uncertainty in rewards as a subtask. CoRR abs/2101.06906 (2021)
- [i23] Hideyuki Ichiwara, Hiroshi Ito, Kenjiro Yamamoto, Hiroki Mori, Tetsuya Ogata: Spatial Attention Point Network for Deep-learning-based Robust Autonomous Robot Motion Generation. CoRR abs/2103.01598 (2021)
- [i22] Kanata Suzuki, Momomi Kanamura, Yuki Suga, Hiroki Mori, Tetsuya Ogata: In-air Knotting of Rope using Dual-Arm Robot based on Deep Learning. CoRR abs/2103.09402 (2021)
- [i21] Minori Toyoda, Kanata Suzuki, Hiroki Mori, Yoshihiko Hayashi, Tetsuya Ogata: Embodying Pre-Trained Word Embeddings Through Robot Actions. CoRR abs/2104.08521 (2021)
- [i20] Namiko Saito, Tetsuya Ogata, Satoshi Funabashi, Hiroki Mori, Shigeki Sugano: How to select and use tools?: Active Perception of Target Objects Using Multimodal Deep Learning. CoRR abs/2106.02445 (2021)
- [i19] Satoshi Ohara, Tetsuya Ogata, Hiromitsu Awano: Binary Neural Network in Robotic Manipulation: Flexible Object Manipulation for Humanoid Robot Using Partially Binarized Auto-Encoder on FPGA. CoRR abs/2107.00209 (2021)
- [i18] Hayato Idei, Wataru Ohata, Yuichi Yamashita, Tetsuya Ogata, Jun Tani: Sensory attenuation develops as a result of sensorimotor experience. CoRR abs/2111.02666 (2021)
- [i17] Hideyuki Ichiwara, Hiroshi Ito, Kenjiro Yamamoto, Hiroki Mori, Tetsuya Ogata: Contact-Rich Manipulation of a Flexible Object based on Deep Predictive Learning using Vision and Tactility. CoRR abs/2112.06442 (2021)
- 2020
- [j72] Hiroshi Ito, Kenjiro Yamamoto, Hiroki Mori, Tetsuya Ogata: Evaluation of Generalization Performance of Visuo-Motor Learning by Analyzing Internal State Structured from Robot Motion. New Gener. Comput. 38(1): 7-22 (2020)
- [c215] Hiroki Mori, Masayuki Masuda, Tetsuya Ogata: Tactile-based curiosity maximizes tactile-rich object-oriented actions even without any extrinsic rewards. ICDL-EPIROB 2020: 1-7
- [c214] Kanata Suzuki, Tetsuya Ogata: Stable Deep Reinforcement Learning Method by Predicting Uncertainty in Rewards as a Subtask. ICONIP (2) 2020: 651-662
- [c213] Kei Kase, Chris Paxton, Hammad Mazhar, Tetsuya Ogata, Dieter Fox: Transferable Task Execution from Pixels through Deep Planning Domain Learning. ICRA 2020: 10459-10465
- [c212] Satoshi Funabashi, Tomoki Isobe, Shun Ogasa, Tetsuya Ogata, Alexander Schmitz, Tito Pradhono Tomo, Shigeki Sugano: Stable In-Grasp Manipulation with a Low-Cost Robot Hand by Using 3-Axis Tactile Sensors with a CNN. IROS 2020: 9166-9173
- [c211] Satoshi Funabashi, Shun Ogasa, Tomoki Isobe, Tetsuya Ogata, Alexander Schmitz, Tito Pradhono Tomo, Shigeki Sugano: Variable In-Hand Manipulations for Tactile-Driven Robot Hand via CNN-LSTM. IROS 2020: 9472-9479
- [c210] Namiko Saito, Danyang Wang, Tetsuya Ogata, Hiroki Mori, Shigeki Sugano: Wiping 3D-objects using Deep Learning Model based on Image/Force/Joint Information. IROS 2020: 10152-10157
- [c209] Kelvin Lukman, Hiroki Mori, Tetsuya Ogata: Viewpoint Planning Based on Uncertainty Maps Created from the Generative Query Network. JSAI 2020: 37-48
- [c208] Pin-Chu Yang, Mohammed Al-Sada, Chang-Chieh Chiu, Kevin Kuo, Tito Pradhono Tomo, Kanata Suzuki, Nelson Yalta, Kuo-Hao Shu, Tetsuya Ogata: HATSUKI: An anime character like robot figure platform with anime-style expressions and imitation learning based action generation. RO-MAN 2020: 384-391
- [c207] Hiroshi Ito, Kenjiro Yamamoto, Hiroki Mori, Shuki Goto, Tetsuya Ogata: Visualization of Focal Cues for Visuomotor Coordination by Gradient-based Methods: A Recurrent Neural Network Shifts The Attention Depending on Task Requirements. SII 2020: 188-194
- [c206] Momomi Kanamura, Yuki Suga, Tetsuya Ogata: Development of a Basic Educational Kit for Robot Development Using Deep Neural Networks. SII 2020: 1360-1363
- [i16] Kei Kase, Chris Paxton, Hammad Mazhar, Tetsuya Ogata, Dieter Fox: Transferable Task Execution from Pixels through Deep Planning Domain Learning. CoRR abs/2003.03726 (2020)
- [i15] Kanata Suzuki, Hiroki Mori, Tetsuya Ogata: Undefined-behavior guarantee by switching to model-based controller according to the embedded dynamics in Recurrent Neural Network. CoRR abs/2003.04862 (2020)
- [i14] Pin-Chu Yang, Mohammed Al-Sada, Chang-Chieh Chiu, Kevin Kuo, Tito Pradhono Tomo, Kanata Suzuki, Nelson Yalta, Kuo-Hao Shu, Tetsuya Ogata: HATSUKI: An anime character like robot figure platform with anime-style expressions and imitation learning based action generation. CoRR abs/2003.14121 (2020)
2010 – 2019
- 2019
- [j71] Fady Ibrahim, A. A. Abouelsoud, Ahmed M. R. Fath El-Bab, Tetsuya Ogata: Path following algorithm for skid-steering mobile robot based on adaptive discontinuous posture control. Adv. Robotics 33(9): 439-453 (2019)
- [j70] Junpei Zhong, Martin Peniak, Jun Tani, Tetsuya Ogata, Angelo Cangelosi: Sensorimotor input as a language generalisation tool: a neurorobotics model for generation and generalisation of noun-verb combinations with sensorimotor inputs. Auton. Robots 43(5): 1271-1290 (2019)
- [j69] Junpei Zhong, Tetsuya Ogata, Angelo Cangelosi, Chenguang Yang: Disentanglement in conceptual space during sensorimotor interaction. Cogn. Comput. Syst. 1(4): 103-112 (2019)
- [j68] Tadahiro Taniguchi, Emre Ugur, Tetsuya Ogata, Takayuki Nagai, Yiannis Demiris: Editorial: Machine Learning Methods for High-Level Cognitive Capabilities in Robotics. Frontiers Neurorobotics 13: 83 (2019)
- [j67] Fady Ibrahim, A. A. Abouelsoud, Ahmed M. R. Fath El-Bab, Tetsuya Ogata: Discontinuous Stabilizing Control of Skid-Steering Mobile Robot (SSMR). J. Intell. Robotic Syst. 95(2): 253-266 (2019)
- [j66] Kazuma Sasaki, Tetsuya Ogata: Adaptive Drawing Behavior by Visuomotor Learning Using Recurrent Neural Networks. IEEE Trans. Cogn. Dev. Syst. 11(1): 119-128 (2019)
- [c205] Nelson Yalta, Shinji Watanabe, Takaaki Hori, Kazuhiro Nakadai, Tetsuya Ogata: CNN-based Multichannel End-to-End Speech Recognition for Everyday Home Environments. EUSIPCO 2019: 1-5
- [c204] Shingo Murata, Hiroki Sawa, Shigeki Sugano, Tetsuya Ogata: Looking Back and Ahead: Adaptation and Planning by Gradient Descent. ICDL-EPIROB 2019: 151-156
- [c203] Shingo Murata, Wataru Masuda, Jiayi Chen, Hiroaki Arie, Tetsuya Ogata, Shigeki Sugano: Achieving Human-Robot Collaboration with Dynamic Goal Inference by Gradient Descent. ICONIP (2) 2019: 579-590
- [c202] Satoshi Funabashi, Gang Yan, Andreas Geier, Alexander Schmitz, Tetsuya Ogata, Shigeki Sugano: Morphology-Specific Convolutional Neural Networks for Tactile Object Recognition with a Multi-Fingered Hand. ICRA 2019: 57-63
- [c201] Nelson Yalta, Shinji Watanabe, Kazuhiro Nakadai, Tetsuya Ogata: Weakly-Supervised Deep Recurrent Neural Networks for Basic Dance Step Generation. IJCNN 2019: 1-8
- [c200] Alexandre Antunes, Alban Laflaquière, Tetsuya Ogata, Angelo Cangelosi: A Bi-directional Multiple Timescales LSTM Model for Grounding of Actions and Verbs. IROS 2019: 2614-2621
- [c199] Kei Kase, Ryoichi Nakajo, Hiroki Mori, Tetsuya Ogata: Learning Multiple Sensorimotor Units to Complete Compound Tasks using an RNN with Multiple Attractors. IROS 2019: 4244-4249
- [c198] Namiko Saito, Nguyen Ba Dai, Tetsuya Ogata, Hiroki Mori, Shigeki Sugano: Real-time Liquid Pouring Motion Generation: End-to-End Sensorimotor Coordination for Unknown Liquid Dynamics Trained with Deep Neural Networks. ROBIO 2019: 1077-1082
- [c197] Shingo Murata, Hikaru Yanagida, Kentaro Katahira, Shinsuke Suzuki, Tetsuya Ogata, Yuichi Yamashita: Large-scale Data Collection for Goal-directed Drawing Task with Self-report Psychiatric Symptom Questionnaires via Crowdsourcing. SMC 2019: 3859-3865
- [i13] Andrey Barsky, Claudio Zito, Hiroki Mori, Tetsuya Ogata, Jeremy L. Wyatt: Multisensory Learning Framework for Robot Drumming. CoRR abs/1907.09775 (2019)
- [i12] Lorenzo Jamone, Tetsuya Ogata, Beata J. Grzyb: From natural to artificial embodied intelligence: is Deep Learning the solution (NII Shonan Meeting 137). NII Shonan Meet. Rep. 2019 (2019)
- 2018
- [j65]Chyon Hae Kim