16th ICMI 2014: Istanbul, Turkey
- Albert Ali Salah, Jeffrey F. Cohn, Björn W. Schuller, Oya Aran, Louis-Philippe Morency, Philip R. Cohen: Proceedings of the 16th International Conference on Multimodal Interaction, ICMI 2014, Istanbul, Turkey, November 12-16, 2014. ACM 2014, ISBN 978-1-4503-2885-2
Keynote Address
- Yvonne Rogers: Bursting our Digital Bubbles: Life Beyond the App. 1
Oral Session 1: Dialogue and Social Interaction
- Dan Bohus, Eric Horvitz: Managing Human-Robot Engagement with Forecasts and... um... Hesitations. 2-9
- Sharon L. Oviatt, Adrienne Cohen: Written Activity, Representations and Fluency as Predictors of Domain Expertise in Mathematics. 10-17
- Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Junji Yamato: Analysis of Respiration for Prediction of "Who Will Be Next Speaker and When?" in Multi-Party Meetings. 18-25
- Spyros Kousidis, Casey Kennington, Timo Baumann, Hendrik Buschmeier, Stefan Kopp, David Schlangen: A Multimodal In-Car Dialogue System That Tracks The Driver's Attention. 26-33
Oral Session 2: Multimodal Fusion
- Héctor Pérez Martínez, Georgios N. Yannakakis: Deep Multimodal Fusion: Combining Discrete Events and Continuous Signals. 34-41
- Joseph F. Grafsgaard, Joseph B. Wiggins, Alexandria Katarina Vail, Kristy Elizabeth Boyer, Eric N. Wiebe, James C. Lester: The Additive Value of Multimodal Features for Predicting Engagement, Frustration, and Learning during Tutoring. 42-49
- Sunghyun Park, Han Suk Shim, Moitreya Chatterjee, Kenji Sagae, Louis-Philippe Morency: Computational Analysis of Persuasiveness in Social Multimedia: A Novel Dataset and Multimodal Prediction Approach. 50-57
- Mohamed Abouelenien, Verónica Pérez-Rosas, Rada Mihalcea, Mihai Burzo: Deception detection using a multimodal approach. 58-65
Demo Session 1
- Ferdinand Fuhrmann, Rene Kaiser: Multimodal Interaction for Future Control Centers: An Interactive Demonstrator. 66-67
- Stefano Piana, Alessandra Staglianò, Francesca Odone, Antonio Camurri: Emotional Charades. 68-69
- Chun-Yen Hsu, Ying-Chao Tung, Han-Yu Wang, Silvia Chyou, Jer-Wei Lin, Mike Y. Chen: Glass Shooter: Exploring First-Person Shooter Game Control with Google Glass. 70-71
- Wolfgang Weiss, Rene Kaiser, Manolis Falelakis: Orchestration for Group Videoconferencing: An Interactive Demonstrator. 72-73
- H. Emrah Tasli, Amogh Gudi, Marten den Uyl: Integrating Remote PPG in Facial Expression Analysis Framework. 74-75
- Vidyavisal Mangipudi, Raj Tumuluri: Context-Aware Multimodal Robotic Health Assistant. 76-77
- Tirthankar Dasgupta, Manjira Sinha, Gagan Kandra, Anupam Basu: WebSanyog: A Portable Assistive Web Browser for People with Cerebral Palsy. 78-79
- Nicolas Riesterer, Christian Becker-Asano, Julien Hué, Christian Dornhege, Bernhard Nebel: The hybrid Agent MARCO. 80-81
- Kuldeep Yadav, Kundan Shrivastava, Om Deshmukh: Towards Supporting Non-linear Navigation in Educational Videos. 82-83
Poster Session 1
- Hayley Hung, Gwenn Englebienne, Laura Cabrera Quiros: Detecting conversing groups with a single worn accelerometer. 84-91
- Young-Ho Kim, Teruhisa Misu: Identification of the Driver's Interest Point using a Head Pose Trajectory for Situated Dialog Systems. 92-95
- Taekbeom Yoo, Yongjae Yoo, Seungmoon Choi: An Explorative Study on Crossmodal Congruence Between Visual and Tactile Icons Based on Emotional Responses. 96-103
- Joseph G. Ellis, Brendan Jou, Shih-Fu Chang: Why We Watch the News: A Dataset for Exploring Sentiment in Broadcast Video News. 104-111
- Stefan Scherer, Zakia Hammal, Ying Yang, Louis-Philippe Morency, Jeffrey F. Cohn: Dyadic Behavior Analysis in Depression Severity Assessment Interviews. 112-119
- Merel M. Jung, Ronald Poppe, Mannes Poel, Dirk Heylen: Touching the Void - Introducing CoST: Corpus of Social Touch. 120-127
- Gloria Zen, Enver Sangineto, Elisa Ricci, Nicu Sebe: Unsupervised Domain Adaptation for Personalized Facial Emotion Recognition. 128-135
- Fumio Nihei, Yukiko I. Nakano, Yuki Hayashi, Hung-Hsuan Huang, Shogo Okada: Predicting Influential Statements in Group Discussions using Speech and Head Motion Information. 136-143
- Malcolm Slaney, Andreas Stolcke, Dilek Hakkani-Tür: The Relation of Eye Gaze and Face Pose: Potential Impact on Speech Recognition. 144-147
- Najmeh Sadoughi, Yang Liu, Carlos Busso: Speech-Driven Animation Constrained by Appropriate Discourse Functions. 148-155
- Martin Halvey, Andrew Crossan: Many Fingers Make Light Work: Non-Visual Capacitive Surface Exploration. 156-163
- Felix Schüssel, Frank Honold, Miriam Schmidt, Nikola Bubalo, Anke Huckauf, Michael Weber: Multimodal Interaction History and its use in Error Detection and Recovery. 164-171
- Radu-Daniel Vatavu, Lisa Anthony, Jacob O. Wobbrock: Gesture Heatmaps: Understanding Gesture Performance with Colorful Visualizations. 172-179
- Cristina Segalin, Alessandro Perina, Marco Cristani: Personal Aesthetics for Soft Biometrics: A Generative Multi-resolution Approach. 180-187
- Ronnie Taib, Benjamin Itzstein, Kun Yu: Synchronising Physiological and Behavioural Sensors in a Driving Simulator. 188-195
- Henny Admoni, Brian Scassellati: Data-Driven Model of Nonverbal Behavior for Socially Assistive Human-Robot Interactions. 196-199
- Lei Chen, Gary Feng, Jilliam Joe, Chee Wee Leong, Christopher Kitchen, Chong Min Lee: Towards Automated Assessment of Public Speaking Skills Using Multimodal Cues. 200-203
- Matthias Wölfel, Luigi Bucchino: Increasing Customers' Attention using Implicit and Explicit Interaction in Urban Advertisement. 204-207
- Risa Suzuki, Shutaro Homma, Eri Matsuura, Ken-ichi Okada: System for Presenting and Creating Smell Effects to Video. 208-215
- Andrew D. Wilson, Hrvoje Benko: CrossMotion: Fusing Device and Image Motion for User Identification, Tracking and Device Association. 216-223
- Giorgio Roffo, Cinzia Giorgetta, Roberta Ferrario, Walter Riviera, Marco Cristani: Statistical Analysis of Personality and Identity in Chats Using a Keylogging Platform. 224-231
- Yosra Rekik, Radu-Daniel Vatavu, Laurent Grisoni: Understanding Users' Perceived Difficulty of Multi-Touch Gesture Articulation. 232-239
- Sayan Ghosh, Moitreya Chatterjee, Louis-Philippe Morency: A Multimodal Context-based Approach for Distress Assessment. 240-246
- Gregor Mehlmann, Markus Häring, Kathrin Janowski, Tobias Baur, Patrick Gebhard, Elisabeth André: Exploring a Model of Gaze for Grounding in Multimodal HRI. 247-254
- Alexandria Katarina Vail, Joseph F. Grafsgaard, Joseph B. Wiggins, James C. Lester, Kristy Elizabeth Boyer: Predicting Learning and Engagement in Tutorial Dialogue: A Personality-Based Model. 255-262
- Dilek Hakkani-Tür, Malcolm Slaney, Asli Celikyilmaz, Larry P. Heck: Eye Gaze for Spoken Language Understanding in Multi-modal Conversational Interactions. 263-266
- Koray Tahiroglu, Thomas Svedström, Valtteri Wikström, Simon Overstall, Johan Kildal, Teemu Tuomas Ahmaniemi: SoundFLEX: Designing Audio to Guide Interactions with Shape-Retaining Deformable Interfaces. 267-274
- Felix Putze, Tanja Schultz: Investigating Intrusiveness of Workload Adaptation. 275-281
Keynote Address 2
- Cafer Tosun: Smart Multimodal Interaction through Big Data. 282
Oral Session 3: Affect and Cognitive Modeling
- Tomislav Pejsa, Dan Bohus, Michael F. Cohen, Chit W. Saw, James Mahoney, Eric Horvitz: Natural Communication about Uncertainties in Situated Interaction. 283-290
- Saskia Koldijk, Maya Sappelli, Suzan Verberne, Mark A. Neerincx, Wessel Kraaij: The SWELL Knowledge Work Dataset for Stress and User Modeling Research. 291-298
- Radoslaw Niewiadomski, Maurizio Mancini, Yu Ding, Catherine Pelachaud, Gualtiero Volpe: Rhythmic Body Movements of Laughter. 299-306
- Alvaro Marcos-Ramiro, Daniel Pizarro-Perez, Marta Marrón Romera, Daniel Gatica-Perez: Automatic Blinking Detection towards Stress Discovery. 307-310
Oral Session 4: Nonverbal Behaviors
- Ilhan Aslan, Andreas Uhl, Alexander Meschtscherjakov, Manfred Tscheligi: Mid-air Authentication Gestures: An Exploration of Authentication Based on Palm and Finger Motions. 311-318
- Marwa Mahmoud, Tadas Baltrusaitis, Peter Robinson: Automatic Detection of Naturalistic Hand-over-Face Gesture Descriptors. 319-326
- Alvaro Marcos-Ramiro, Daniel Pizarro-Perez, Marta Marrón Romera, Daniel Gatica-Perez: Capturing Upper Body Motion in Conversation: An Appearance Quasi-Invariant Approach. 327-334
- Nanxiang Li, Carlos Busso: User Independent Gaze Estimation by Exploiting Similarity Measures in the Eye Pair Appearance Eigenspace. 335-338
Doctoral Spotlight Session
- Julián Zapata: Exploring multimodality for translator-computer interaction. 339-343
- Merel M. Jung: Towards Social Touch Intelligence: Developing a Robust System for Automatic Touch Recognition. 344-348
- Karan Sikka: Facial Expression Analysis for Estimating Pain in Clinical Settings. 349-353
- Takaaki Sugiyama: Realizing Robust Human-Robot Interaction under Real Environments with Noises. 354-358
- Heysem Kaya: Speaker- and Corpus-Independent Methods for Affect Classification in Computational Paralinguistics. 359-363
- Ailbhe Finnerty: The Impact of Changing Communication Practices. 364-368
- Hande Özgür Alemdar: Multi-Resident Human Behaviour Identification in Ambient Assisted Living Environments. 369-373
- Çagla Çig: Gaze-Based Proactive User Interface for Pen-Based Systems. 374-378
- Nanxiang Li: Appearance based user-independent gaze estimation. 379-383
- Andreza Sartori: Affective Analysis of Abstract Paintings Using Statistical Analysis and Art Theory. 384-388
- Julia Wache: The Secret Language of Our Body: Affect and Personality Recognition Using Physiological Signals. 389-393
- Jeffrey M. Girard: Perceptions of Interpersonal Behavior are Influenced by Gender, Facial Expression Intensity, and Head Pose. 394-398
- Tomislav Pejsa: Authoring Communicative Behaviors for Situated, Embodied Characters. 399-403
- Joseph F. Grafsgaard: Multimodal Analysis and Modeling of Nonverbal Behaviors during Tutoring. 404-408
Keynote Address 3
- Peter Robinson: Computation of Emotions. 409-410
Oral Session 5: Mobile and Urban Interaction
- Emily Fujimoto, Matthew Turk: Non-Visual Navigation Using Combined Audio Music and Haptic Cues. 411-418
- Euan Freeman, Stephen A. Brewster, Vuokko Lantz: Tactile Feedback for Above-Device Gesture Interfaces: Adding Touch to Touchless Interactions. 419-426
- Andrey Bogomolov, Bruno Lepri, Jacopo Staiano, Nuria Oliver, Fabio Pianesi, Alex Pentland: Once Upon a Crime: Towards Crime Prediction from Demographics and Mobile Data. 427-434
- Philipp Tiefenbacher, Steven Wichert, Daniel Merget, Gerhard Rigoll: Impact of Coordinate Systems on 3D Manipulations in Mobile Augmented Reality. 435-438
Oral Session 6: Healthcare and Assistive Technologies
- Yasmine N. El-Glaly, Francis K. H. Quek: Digital Reading Support for The Blind by Multimodal Interaction. 439-446
- Jonathan Bidwell, Irfan A. Essa, Agata Rozga, Gregory D. Abowd: Measuring Child Visual Attention using Markerless Head Tracking from Color and Depth Sensing Cameras. 447-454
- Temitayo A. Olugbade, M. S. Hane Aung, Nadia Bianchi-Berthouze, Nicolai Marquardt, Amanda C. de C. Williams: Bi-Modal Detection of Painful Reaching for Chronic Pain Rehabilitation Systems. 455-458
Keynote Address 4
- Alexander Waibel: A World without Barriers: Connecting the World across Languages, Distances and Media. 459-460
The Second Emotion Recognition In The Wild Challenge
- Abhinav Dhall, Roland Goecke, Jyoti Joshi, Karan Sikka, Tom Gedeon: Emotion Recognition In The Wild Challenge 2014: Baseline, Data and Protocol. 461-466
- Michal Grosicki: Neural Networks for Emotion Recognition in the Wild. 467-472
- Fabien Ringeval, Shahin Amiriparian, Florian Eyben, Klaus R. Scherer, Björn W. Schuller: Emotion Recognition in the Wild: Incorporating Voice and Lip Activity in Multimodal Decision-Level Fusion. 473-480
- Bo Sun, Liandong Li, Tian Zuo, Ying Chen, Guoyan Zhou, Xuewen Wu: Combining Multimodal Features with Hierarchical Classifier Fusion for Emotion Recognition in the Wild. 481-486
- Heysem Kaya, Albert Ali Salah: Combining Modality-Specific Extreme Learning Machines for Emotion Recognition in the Wild. 487-493
- Mengyi Liu, Ruiping Wang, Shaoxin Li, Shiguang Shan, Zhiwu Huang, Xilin Chen: Combining Multiple Kernel Methods on Riemannian Manifold for Emotion Recognition in the Wild. 494-501
- Sascha Meudt, Friedhelm Schwenker: Enhanced Autocorrelation in Real World Emotion Recognition. 502-507
- JunKai Chen, Zenghai Chen, Zheru Chi, Hong Fu: Emotion Recognition in the Wild with Feature Fusion and Multiple Kernel Learning. 508-513
- Xiaohua Huang, Qiuhai He, Xiaopeng Hong, Guoying Zhao, Matti Pietikäinen: Improved Spatiotemporal Local Monogenic Binary Pattern for Emotion Recognition in The Wild. 514-520
- Maxim Sidorov, Wolfgang Minker: Emotion Recognition in Real-world Conditions with Acoustic and Visual Features. 521-524
Workshop Overviews
- Kim Hartmann, Björn W. Schuller, Ronald Böck: ERM4HCI 2014: The 2nd Workshop on Emotion Representation and Modelling in Human-Computer-Interaction-Systems. 525-526
- Hung-Hsuan Huang, Roman Bednarik, Kristiina Jokinen, Yukiko I. Nakano: Gaze-in 2014: the 7th Workshop on Eye Gaze in Intelligent Human Machine Interaction. 527-528
- Oya Çeliktutan, Florian Eyben, Evangelos Sariyanidi, Hatice Gunes, Björn W. Schuller: MAPTRAITS 2014 - The First Audio/Visual Mapping Personality Traits Challenge - An Introduction: Perceived Personality and Social Dimensions. 529-530
- Xavier Ochoa, Marcelo Worsley, Katherine Chiluiza, Saturnino Luz: MLA'14: Third Multimodal Learning Analytics Workshop and Grand Challenges. 531-532
- Mary Ellen Foster, Manuel Giuliani, Ronald P. A. Petrick: ICMI 2014 Workshop on Multimodal, Multi-Party, Real-World Human-Robot Interaction. 533-534
- Dirk Heylen, Alessandro Vinciarelli: An Outline of Opportunities for Multimodal Research. 535-536
- Samer Al Moubayed, Dan Bohus, Anna Esposito, Dirk Heylen, Maria Koutsombogera, Harris Papageorgiou, Gabriel Skantze: UM3I 2014: International Workshop on Understanding and Modeling Multiparty, Multimodal Interactions. 537-538