


15th ICMI 2013: Sydney, NSW, Australia
- Julien Epps, Fang Chen, Sharon L. Oviatt, Kenji Mase, Andrew Sears, Kristiina Jokinen, Björn W. Schuller: 2013 International Conference on Multimodal Interaction, ICMI '13, Sydney, NSW, Australia, December 9-13, 2013. ACM 2013, ISBN 978-1-4503-2129-7
Keynote 1
- James M. Rehg: Behavior imaging and the study of autism. 1-2
Oral session 1: personality
- Subramanian Ramanathan, Yan Yan, Jacopo Staiano, Oswald Lanz, Nicu Sebe: On the relationship between head pose, social attention and personality prediction for unstructured and dynamic group interactions. 3-10
- Oya Aran, Daniel Gatica-Perez: One of a kind: inferring personality impressions in meetings. 11-18
- Gelareh Mohammadi, Sunghyun Park, Kenji Sagae, Alessandro Vinciarelli, Louis-Philippe Morency: Who is persuasive?: the role of perceived personality and communication modality in social multimedia. 19-26
- Kyriaki Kalimeri, Bruno Lepri, Fabio Pianesi: Going beyond traits: multimodal classification of personality states in the wild. 27-34
Oral session 2: communication
- Yukiko I. Nakano, Naoya Baba, Hung-Hsuan Huang, Yuki Hayashi: Implementation and evaluation of a multimodal addressee identification mechanism for multiparty conversation systems. 35-42
- Iolanda Leite, Hannaneh Hajishirzi, Sean Andrist, Jill Fain Lehman: Managing chaos: models of turn-taking in character-multichild interactions. 43-50
- Iwan de Kok, Dirk Heylen, Louis-Philippe Morency: Speaker-adaptive multimodal prediction model for listener responses. 51-58
- Jussi Rantala, Sebastian Müller, Roope Raisamo, Katja Suhonen, Kaisa Väänänen-Vainio-Mattila, Vuokko Lantz: User experiences of mobile audio conferencing with spatial audio, haptics and gestures. 59-66
Demo session 1
- Anne Loomis Thompson, Dan Bohus: A framework for multimodal data collection, visualization, annotation and learning. 67-68
- Philip R. Cohen, M. Cecelia Buchanan, Edward C. Kaiser, Michael J. Corrigan, Scott Lind, Matt Wesson: Demonstration of sketch-thru-plan: a multimodal interface for command and control. 69-70
- Jacqueline M. Kory, Sooyeon Jeong, Cynthia Breazeal: Robotic learning companions for early language development. 71-72
- Graham Wilcock, Kristiina Jokinen: WikiTalk human-robot interactions. 73-74
Poster session 1
- Peng Liu, Michael Reale, Xing Zhang, Lijun Yin: Saliency-guided 3D head pose estimation on 3D expression models. 75-78
- Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Masafumi Matsuda, Junji Yamato: Predicting next speaker and timing from gaze transition patterns in multi-party meetings. 79-86
- Kenneth Alberto Funes Mora, Laurent Son Nguyen, Daniel Gatica-Perez, Jean-Marc Odobez: A semi-automated system for accurate gaze coding in natural dyadic interactions. 87-90
- Nanxiang Li, Carlos Busso: Evaluating the robustness of an appearance-based gaze estimation method for multimodal interfaces. 91-98
- Catharine Oertel, Giampiero Salvi: A gaze-based method for relating group involvement to individual engagement in multimodal multiparty dialogue. 99-106
- Samira Sheikhi, Vasil Khalidov, David Klotz, Britta Wrede, Jean-Marc Odobez: Leveraging the robot dialog state for visual focus of attention recognition. 107-110
- Davide Maria Calandra, Antonio Caso, Francesco Cutugno, Antonio Origlia, Silvia Rossi: CoWME: a general framework to evaluate cognitive workload during multimodal interaction. 111-118
- Joan-Isaac Biel, Vagia Tsiminaki, John Dines, Daniel Gatica-Perez: Hi YouTube!: personality impressions and verbal content in social video. 119-126
- Oya Aran, Daniel Gatica-Perez: Cross-domain personality prediction: from video blogs to small group meetings. 127-130
- Rada Mihalcea, Verónica Pérez-Rosas, Mihai Burzo: Automatic detection of deceit in verbal communication. 131-134
- Stefan Scherer, Giota Stratou, Louis-Philippe Morency: Audiovisual behavior descriptors for depression assessment. 135-140
- Young Chol Song, Henry A. Kautz, James F. Allen, Mary D. Swift, Yuncheng Li, Jiebo Luo, Ce Zhang: A Markov logic framework for recognizing complex events from multimodal data. 141-148
- Chreston A. Miller, Francis K. H. Quek, Louis-Philippe Morency: Interactive relevance search and modeling: support for expert-driven analysis of multimodal data. 149-156
- Alexander Neumann, Christian Schnier, Thomas Hermann, Karola Pitsch: Interaction analysis and joint attention tracking in augmented reality. 165-172
- Julie R. Williamson, Stephen A. Brewster, Rama Vennelakanti: Mo!Games: evaluating mobile gestures in the wild. 173-180
- Benjamin Inden, Zofia Malisz, Petra Wagner, Ipke Wachsmuth: Timing and entrainment of multimodal backchanneling behavior for an embodied conversational agent. 181-188
- David Antonio Gómez Jáuregui, Léonor Philip, Céline Clavel, Stéphane Padovani, Mahin Bailly, Jean-Claude Martin: Video analysis of approach-avoidance behaviors of teenagers speaking with virtual agents. 189-196
- Lorenzo Lucignano, Francesco Cutugno, Silvia Rossi, Alberto Finzi: A dialogue system for multimodal human-robot interaction. 197-204
- Qasem T. Obeidat, Tom A. Campbell, Jun Kong: The zigzag paradigm: a new P300-based brain computer interface. 205-212
- Lode Hoste, Beat Signer: SpeeG2: a speech- and gesture-based interface for efficient controller-free text input. 213-220
Oral session 3: intelligent & multimodal interfaces
- Sharon L. Oviatt: Interfaces for thinkers: computer input capabilities that support inferential reasoning. 221-228
- Antti Ajanki, Markus Koskela, Jorma Laaksonen, Samuel Kaski: Adaptive timeline interface to personal history data. 229-236
- Yale Song, Louis-Philippe Morency, Randall Davis: Learning a sparse codebook of facial and body microexpressions for emotion recognition. 237-244
Keynote 2
- Stefan Kopp: Giving interaction a hand: deep models of co-speech gesture in multimodal systems. 245-246
Oral session 4: embodied interfaces
- Daniel Tetteroo, Iris Soute, Panos Markopoulos: Five key challenges in end-user development for tangible and embodied interaction. 247-254
- Mary Ellen Foster, Andre Gaschler, Manuel Giuliani: How can I help you?: comparing engagement classification strategies for a robot bartender. 255-262
- Manuel Giuliani, Ronald P. A. Petrick, Mary Ellen Foster, Andre Gaschler, Amy Isard, Maria Pateraki, Markos Sigalas: Comparing task-based and socially intelligent behaviour in a robot bartender. 263-270
- Imène Jraidi, Maher Chaouachi, Claude Frasson: A dynamic multimodal approach for assessing learners' interaction experience. 271-278
Oral session 5: hand and body
- Radu-Daniel Vatavu, Lisa Anthony, Jacob O. Wobbrock: Relative accuracy measures for stroke gestures. 279-286
- Xiang Xiao, Teng Han, Jingtao Wang: LensGesture: augmenting mobile interactions with back-of-device finger gestures. 287-294
- Ryan Stedman, Michael A. Terry, Edward Lank: Aiding human discovery of handwriting recognition errors. 295-302
- Shogo Okada, Mayumi Bono, Katsuya Takanashi, Yasuyuki Sumi, Katsumi Nitta: Context-based conversational hand gesture classification in narrative interaction. 303-310
Demo session 2
- Jong-uk Lee, Jeong-Mook Lim, Heesook Shin, Ki-Uk Kyung: A haptic touchscreen interface for mobile devices. 311-312
- Laurence Devillers, Mariette Soury: A social interaction system for studying humor with the Robot NAO. 313-314
- Aduén Darriba Frederiks, Dirk Heylen, Gijs Huisman: TaSST: affective mediated touch. 315-316
- Omar Mubin, Joshua Henderson, Christoph Bartneck: Talk ROILA to your Robot. 317-318
- Syaheerah Lebai Lutfi, Fernando Fernández Martínez, Jaime Lorenzo-Trueba, Roberto Barra-Chicote, Juan Manuel Montero: NEMOHIFI: an affective HiFi agent. 319-320
Poster session 2: doctoral spotlight
- Sunghyun Park: Persuasiveness in social multimedia: the role of communication modality and the challenge of crowdsourcing annotations. 321-324
- Kyriaki Kalimeri: Towards a dynamic view of personality: multimodal classification of personality states in everyday situations. 325-328
- Chien-Ming Huang: Designing effective multimodal behaviors for robots: a data-driven perspective. 329-332
- Sean Andrist: Controllable models of gaze behavior for virtual agents and humanlike robots. 333-336
- Jamy Li: The nature of the bots: how people respond to robots, virtual agents and humans as multimodal stimuli. 337-340
- Iván Gris Sepulveda: Adaptive virtual rapport for embodied conversational agents. 341-344
- Kenneth Alberto Funes Mora: 3D head pose and gaze tracking and their application to diverse multimodal tasks. 345-348
- Catharine Oertel: Towards developing a model for group involvement and individual engagement. 349-352
- Bin Liang: Gesture recognition using depth images. 353-356
- Erina Ishikawa: Modeling semantic aspects of gaze behavior while catalog browsing. 357-360
- Shyam Sundar Rajagopalan: Computational behaviour modelling for autism diagnosis. 361-364
Grand challenge overviews
- Sergio Escalera, Jordi Gonzàlez, Xavier Baró, Miguel Reyes, Isabelle Guyon, Vassilis Athitsos, Hugo Jair Escalante, Leonid Sigal, Antonis A. Argyros, Cristian Sminchisescu, Richard Bowden, Stan Sclaroff: ChaLearn multi-modal gesture recognition 2013: grand challenge and workshop summary. 365-368
- Abhinav Dhall, Roland Goecke, Jyoti Joshi, Michael Wagner, Tom Gedeon: Emotion recognition in the wild challenge (EmotiW) challenge and workshop summary. 371-372
- Louis-Philippe Morency, Sharon L. Oviatt, Stefan Scherer, Nadir Weibel, Marcelo Worsley: ICMI 2013 grand challenge workshop on multimodal learning analytics. 373-378
Keynote 3
- Mark Billinghurst: Hands and speech in space: multimodal interaction with augmented reality interfaces. 379-380
Oral session 6: AR, VR & mobile
- Klen Copic Pucihar, Paul Coulton, Jason Alexander: Evaluating dual-view perceptual issues in handheld augmented reality: device vs. user perspective rendering. 381-388
- Kazuhiro Otsuka, Shiro Kumano, Ryo Ishii, Maja Zbogar, Junji Yamato: MM+Space: n x 4 degree-of-freedom kinetic display for recreating multiparty conversation spaces. 389-396
- Reina Aramaki, Makoto Murakami: Investigating appropriate spatial relationship between user and AR character agent for communication using AR WoZ system. 397-404
- Trinh Minh Tri Do, Kyriaki Kalimeri, Bruno Lepri, Fabio Pianesi, Daniel Gatica-Perez: Inferring social activities with mobile sensor networks. 405-412
Oral session 7: eyes & body
- Ichiro Umata, Seiichi Yamamoto, Koki Ijuin, Masafumi Nishida: Effects of language proficiency on eye-gaze in second language conversations: toward supporting second language collaboration. 413-420
- Ryo Yonetani, Hiroaki Kawashima, Takashi Matsuyama: Predicting where we look from spatiotemporal gaps. 421-428
- Marwa Mahmoud, Louis-Philippe Morency, Peter Robinson: Automatic multimodal descriptors of rhythmic body movement. 429-436
- Laurent Son Nguyen, Alvaro Marcos-Ramiro, Marta Marrón Romera, Daniel Gatica-Perez: Multimodal analysis of body communication cues in employment interviews. 437-444
ChaLearn challenge and workshop on multi-modal gesture recognition
- Sergio Escalera, Jordi Gonzàlez, Xavier Baró, Miguel Reyes, Oscar Lopes, Isabelle Guyon, Vassilis Athitsos, Hugo Jair Escalante: Multi-modal gesture recognition challenge 2013: dataset and results. 445-452
- Jiaxiang Wu, Jian Cheng, Chaoyang Zhao, Hanqing Lu: Fusing multi-modal features for gesture recognition. 453-460
- Immanuel Bayer, Thierry Silbermann: A multi modal approach to gesture recognition from audio and video data. 461-466
- Xi Chen, Markus Koskela: Online RGB-D gesture recognition with extreme learning machines. 467-474
- Karthik Nandakumar, Kong-Wah Wan, Siu Man Alice Chan, Wen Zheng Terence Ng, Jian-Gang Wang, Wei-Yun Yau: A multi-modal gesture recognition system using audio, video, and skeletal joint data. 475-482
- Simon Ruffieux, Denis Lalanne, Elena Mugellini: ChAirGest: a challenge for multimodal mid-air gesture recognition for close HCI. 483-488
- Ying Yin, Randall Davis: Gesture spotting and recognition using salience detection and concatenated hidden Markov models. 489-494
- Víctor Ponce-López, Sergio Escalera, Xavier Baró: Multi-modal social signal analysis for predicting agreement in conversation settings. 495-502
- Jordi Abella, Raúl Alcaide, Anna Sabaté, Joan Mas, Sergio Escalera, Jordi Gonzàlez, Coen Antens: Multi-modal descriptors for multi-class hand pose recognition in human computer interaction systems. 503-508
Emotion recognition in the wild challenge and workshop
- Abhinav Dhall, Roland Goecke, Jyoti Joshi, Michael Wagner, Tom Gedeon: Emotion recognition in the wild challenge 2013. 509-516
- Karan Sikka, Karmen Dykstra, Suchitra Sathyanarayana, Gwen Littlewort, Marian Stewart Bartlett: Multiple kernel learning for emotion recognition in the wild. 517-524
- Mengyi Liu, Ruiping Wang, Zhiwu Huang, Shiguang Shan, Xilin Chen: Partial least squares regression on Grassmannian manifold for emotion recognition. 525-530
- Matthew Day: Emotion recognition with boosted tree classifiers. 531-534
- Timur R. Almaev, Anil Yüce, Alexandru Ghitulescu, Michel François Valstar: Distribution-based iterative pairwise classification of emotions in the wild using LGBP-TOP. 535-542
- Samira Ebrahimi Kahou, Christopher J. Pal, Xavier Bouthillier, Pierre Froumenty, Çaglar Gülçehre, Roland Memisevic, Pascal Vincent, Aaron C. Courville, Yoshua Bengio, Raul Chandias Ferrari, Mehdi Mirza, Sébastien Jean, Pierre Luc Carrier, Yann N. Dauphin, Nicolas Boulanger-Lewandowski, Abhishek Aggarwal, Jeremie Zumer, Pascal Lamblin, Jean-Philippe Raymond, Guillaume Desjardins, Razvan Pascanu, David Warde-Farley, Atousa Torabi, Arjun Sharma, Emmanuel Bengio, Kishore Reddy Konda, Zhenzhou Wu: Combining modality specific deep neural networks for emotion recognition in video. 543-550
- Sascha Meudt, Dimitri Zharkov, Markus Kächele, Friedhelm Schwenker: Multi classifier systems and forward backward feature selection algorithms to classify emotional coloured speech. 551-556
- Tarun Krishna, Ayush K. Rai, Shubham Bansal, Shubham Khandelwal, Shubham Gupta, Dushyant Goyal: Emotion recognition using facial and audio features. 557-564
Multimodal learning analytics challenge
- Sharon L. Oviatt, Adrienne Cohen, Nadir Weibel: Multimodal learning analytics: description of math data corpus for ICMI grand challenge workshop. 563-568
- Sharon L. Oviatt: Problem solving, domain expertise and learning: ground-truth performance results for math data corpus. 569-574
- Saturnino Luz: Automatic identification of experts and performance prediction in the multimodal math data corpus through analysis of speech interaction. 575-582
- Xavier Ochoa, Katherine Chiluiza, Gonzalo Méndez, Gonzalo Luzardo, Bruno Guamán, James Castells: Expertise estimation based on simple multimodal features. 583-590
- Kate Thompson: Using micro-patterns of speech to predict the correctness of answers to mathematics problems: an exercise in multimodal learning analytics. 591-598
- Sharon L. Oviatt, Adrienne Cohen: Written and multimodal representations as predictors of expertise and problem-solving success in mathematics. 599-606
Workshop overview
- Kim Hartmann, Ronald Böck, Christian Becker-Asano, Jonathan Gratch, Björn W. Schuller, Klaus R. Scherer: ERM4HCI 2013: the 1st workshop on emotion representation and modelling in human-computer-interaction-systems. 607-608
- Roman Bednarik, Hung-Hsuan Huang, Yukiko I. Nakano, Kristiina Jokinen: GazeIn'13: the 6th workshop on eye gaze in intelligent human machine interaction: gaze in multimodal interaction. 609-610
- Manuel Kretzer, Andrea Minuto, Anton Nijholt: Smart material interfaces: "another step to a material future". 611-612
