4th GazeIn@ICMI 2012: Santa Monica, CA, USA
- Proceedings of the 4th Workshop on Eye Gaze in Intelligent Human Machine Interaction, GazeIn@ICMI 2012, Santa Monica, California, USA, October 26, 2012. ACM 2012, ISBN 978-1-4503-1516-6
- Deepak Khosla, Matthew Keegan, Lei Zhang, Kevin R. Martin, Darrel J. VanBuer, David J. Huber: Brain-enhanced synergistic attention (BESA). 1:1-1:7
- Christopher McMurrough, Jonathan Rich, Christopher Conly, Vassilis Athitsos, Fillia Makedon: Multi-modal object of interest detection using eye gaze and RGB-D cameras. 2:1-2:6
- Samer Al Moubayed, Gabriel Skantze: Perception of gaze direction for situated interaction. 3:1-3:6
- Sean Andrist, Tomislav Pejsa, Bilge Mutlu, Michael Gleicher: A head-eye coordination model for animating gaze shifts of virtual characters. 4:1-4:6
- Magdalena Rychlowska, Leah Zinner, Serban C. Musca, Paula M. Niedenthal: From the eye to the heart: eye contact triggers emotion simulation. 5:1-5:7
- Naoya Baba, Hung-Hsuan Huang, Yukiko I. Nakano: Addressee identification for human-human-agent multiparty conversations in different proxemics. 6:1-6:6
- Hana Vrzakova, Roman Bednarik: Hard lessons learned: mobile eye-tracking in cockpits. 7:1-7:6
- Kosuke Kimura, Hung-Hsuan Huang, Kyoji Kawagoe: Analysis on learners' gaze patterns and the instructor's reactions in ballroom dance tutoring. 8:1-8:6
- Kosuke Kabashima, Kristiina Jokinen, Masafumi Nishida, Seiichi Yamamoto: Multimodal corpus of conversations in mother tongue and second language by same interlocutors. 9:1-9:5
- Roman Bednarik, Shahram Eivazi, Michal Hradis: Gaze and conversational engagement in multiparty video conversation: an annotation scheme and classification of high and low levels of engagement. 10:1-10:6
- Andres Levitski, Jenni Radun, Kristiina Jokinen: Visual interaction and conversational activity. 11:1-11:6
- Monika Elepfandt, Martin Grund: Move it there, or not?: the design of voice commands for gaze with speech. 12:1-12:3
- Tong Cha, Sebastian Maier: Eye gaze assisted human-computer interaction in a hand gesture controlled multi-display environment. 13:1-13:3
- Zixuan Wang, Jinyun Yan, Hamid K. Aghajan: A framework of personal assistant for computer users by analyzing video stream. 14:1-14:3
- Saori Yamamoto, Nazomu Teraya, Yumika Nakamura, Narumi Watanabe, Yande Lin, Mayumi Bono, Yugo Takeuchi: Simple multi-party video conversation system focused on participant eye gaze: "Ptolemaeus" provides participants with smooth turn-taking. 15:1-15:3
- Tobias Schuchert, Sascha Voth, Judith Baumgarten: Sensing visual attention using an interactive bidirectional HMD. 16:1-16:3
- Erina Ishikawa, Ryo Yonetani, Hiroaki Kawashima, Takatsugu Hirayama, Takashi Matsuyama: Semantic interpretation of eye movements using designed structures of displayed contents. 17:1-17:3
- Yuki Hayashi, Tomoko Kojiri, Toyohide Watanabe: A communication support interface based on learning awareness for collaborative learning. 18:1-18:3