20th ICMI 2018: Boulder, CO, USA
- Sidney K. D'Mello, Panayiotis G. Georgiou, Stefan Scherer, Emily Mower Provost, Mohammad Soleymani, Marcelo Worsley:
  Proceedings of the 2018 on International Conference on Multimodal Interaction, ICMI 2018, Boulder, CO, USA, October 16-20, 2018. ACM 2018
Keynote & Invited Talks
- Shrikanth S. Narayanan: A Multimodal Approach to Understanding Human Vocal Expressions and Beyond. 1
- Mary Czerwinski: Using Technology for Health and Wellbeing. 2
- Paula M. Niedenthal: Reinforcing, Reassuring, and Roasting: The Forms and Functions of the Human Smile. 3
- James L. Crowley: Put That There: 20 Years of Research on Multimodal Interaction. 4
Session 1: Multiparty Interaction
- Setareh Nasihati Gilani, David R. Traum, Arcangelo Merla, Eugenia Hee, Zoey Walker, Barbara Manini, Grady Gallagher, Laura-Ann Petitto: Multimodal Dialogue Management for Multiparty Interaction with Infants. 5-13
- Gabriel Murray, Catharine Oertel: Predicting Group Performance in Task-Based Interaction. 14-20
- Angela E. B. Stewart, Zachary A. Keirn, Sidney K. D'Mello: Multimodal Modeling of Coordination and Coregulation Patterns in Speech Rate during Triadic Collaborative Problem Solving. 21-30
- Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Ryuichiro Higashinaka, Junji Tomita: Analyzing Gaze Behavior and Dialogue Act during Turn-taking for Estimating Empathy Skill Level. 31-39
Session 2: Physiological Modeling
- Jeffrey F. Cohn, László A. Jeni, Itir Önal Ertugrul, Donald Malone, Michael S. Okun, David A. Borton, Wayne K. Goodman: Automated Affect Detection in Deep Brain Stimulation for Obsessive-Compulsive Disorder: A Pilot Study. 40-44
- Emanuela Maggioni, Robert Cobden, Dmitrijs Dmitrenko, Marianna Obrist: Smell-O-Message: Integration of Olfactory Notifications into a Messaging Application to Improve Users' Performance. 45-54
- Gao-Yi Chao, Chun-Min Chang, Jeng-Lin Li, Ya-Tse Wu, Chi-Chun Lee: Generating fMRI-Enriched Acoustic Vectors using a Cross-Modality Adversarial Network for Emotion Recognition. 55-62
- Phuong Pham, Jingtao Wang: Adaptive Review for Mobile MOOC Learning via Multimodal Physiological Signal Sensing - A Longitudinal Study. 63-72
- Katri Salminen, Jussi Rantala, Poika Isokoski, Marko Lehtonen, Philipp Müller, Markus Karjalainen, Jari Väliaho, Anton Kontunen, Ville Nieminen, Joni Leivo, Anca A. Telembeci, Jukka Lekkala, Pasi Kallio, Veikko Surakka: Olfactory Display Prototype for Presenting and Sensing Authentic and Synthetic Odors. 73-77
Session 3: Sound and Interaction
- Divesh Lala, Koji Inoue, Tatsuya Kawahara: Evaluation of Real-time Deep Learning Turn-taking Models for Multiple Dialogue Scenarios. 78-86
- Sharon L. Oviatt: Ten Opportunities and Challenges for Advancing Student-Centered Multimodal Learning Analytics. 87-94
- Michael Bonfert, Maximilian Spliethöver, Roman Arzaroli, Marvin Lange, Martin Hanci, Robert Porzel: If You Ask Nicely: A Digital Assistant Rebuking Impolite Voice Commands. 95-102
- Caroline Langlet, Chloé Clavel: Detecting User's Likes and Dislikes for a Virtual Negotiating Agent. 103-110
- George Sterpu, Christian Saam, Naomi Harte: Attention-based Audio-Visual Fusion for Robust Automatic Speech Recognition. 111-115
Session 4: Touch and Gesture
- Sophie Skach, Rebecca Stewart, Patrick G. T. Healey: Smart Arse: Posture Classification with Textile Sensors in Trousers. 116-124
- Jean Vanderdonckt, Paolo Roselli, Jorge Luis Pérez-Medina: !FTL, an Articulation-Invariant Stroke Gesture Recognizer with Controllable Position, Scale, and Rotation Invariances. 125-134
- Ilhan Aslan, Tabea Schmidt, Jens Woehrle, Lukas Vogel, Elisabeth André: Pen + Mid-Air Gestures: Eliciting Contextual Gestures. 135-144
- Benjamin Hatscher, Christian Hansen: Hand, Foot or Voice: Alternative Input Modalities for Touchless Interaction in the Medical Domain. 145-153
Session 5: Human Behavior
- Klaus Weber, Hannes Ritschel, Ilhan Aslan, Florian Lingenfelser, Elisabeth André: How to Shape the Humor of a Robot - Social Behavior Adaptation Based on Reinforcement Learning. 154-162
- Yun-Shao Lin, Chi-Chun Lee: Using Interlocutor-Modulated Attention BLSTM to Predict Personality Traits in Small Group Interaction. 163-169
- Alexandria K. Vail, Elizabeth S. Liebson, Justin T. Baker, Louis-Philippe Morency: Toward Objective, Multifaceted Characterization of Psychotic Disorders: Lexical, Structural, and Disfluency Markers of Spoken Language. 170-178
- Victor Ardulov, Madelyn Mendlen, Manoj Kumar, Neha Anand, Shanna Williams, Thomas D. Lyon, Shrikanth S. Narayanan: Multimodal Interaction Modeling of Child Forensic Interviewing. 179-185
- Matthew Roddy, Gabriel Skantze, Naomi Harte: Multimodal Continuous Turn-Taking Prediction Using Multiscale RNNs. 186-190
Session 6: Artificial Agents
- Kazuhiro Otsuka, Keisuke Kasuga, Martina Köhler: Estimating Visual Focus of Attention in Multiparty Meetings using Deep Convolutional Neural Networks. 191-199
- Jan Ondras, Hatice Gunes: Detecting Deception and Suspicion in Dyadic Game Interactions. 200-209
- Abhinav Shukla, Harish Katti, Mohan S. Kankanhalli, Ramanathan Subramanian: Looking Beyond a Clever Narrative: Visual Context and Attention are Primary Drivers of Affect in Video Advertisements. 210-219
- Reshmashree B. Kantharaju, Fabien Ringeval, Laurent Besacier: Automatic Recognition of Affective Laughter in Spontaneous Dyadic Interactions from Audiovisual Signals. 220-228
- Aditya Gujral, Theodora Chaspari, Adela C. Timmons, Yehsong Kim, Sarah Barrett, Gayla Margolin: Population-specific Detection of Couples' Interpersonal Conflict using Multi-task Learning. 229-233
Poster Session 1
- Dmitrijs Dmitrenko, Emanuela Maggioni, Marianna Obrist: I Smell Trouble: Using Multiple Scents To Convey Driving-Relevant Information. 234-238
- Shao-Yen Tseng, Haoqi Li, Brian R. Baucom, Panayiotis G. Georgiou: "Honey, I Learned to Talk": Multimodal Fusion for Behavior Analysis. 239-243
- Shraddha Pandya, Yasmine N. El-Glaly: TapTag: Assistive Gestural Interactions in Social Media on Touchscreens for Older Adults. 244-252
- Ilhan Aslan, Michael Dietz, Elisabeth André: Gazeover - Exploring the UX of Gaze-triggered Affordance Communication for GUI Elements. 253-257
- Felix Putze, Dennis Küster, Sonja Annerer-Walcher, Mathias Benedek: Dozing Off or Thinking Hard?: Classifying Multi-dimensional Attentional States in the Classroom from Video. 258-262
- Oludamilare Matthews, Markel Vigo, Simon Harper: Sensing Arousal and Focal Attention During Visual Interaction. 263-267
- Almoctar Hassoumi, Pourang Irani, Vsevolod Peysakhovich, Christophe Hurter: Path Word: A Multimodal Password Entry Method for Ad-hoc Authentication Based on Digits' Shape and Smooth Pursuit Eye Movements. 268-277
- Wei Guo, Jingtao Wang: Towards Attentive Speed Reading on Small Screen Wearable Devices. 278-287
- Wei Guo, Jingtao Wang: Understanding Mobile Reading via Camera Based Gaze Tracking and Kinematic Touch Modeling. 288-297
- Yu-Sian Jiang, Garrett Warnell, Peter Stone: Inferring User Intention using Gaze in Vehicles. 298-306
- Pedro Figueiredo, Manuel J. Fonseca: EyeLinks: A Gaze-Only Click Alternative for Heterogeneous Clickables. 307-314
- Maneesh Bilalpur, Mohan S. Kankanhalli, Stefan Winkler, Ramanathan Subramanian: EEG-based Evaluation of Cognitive Workload Induced by Acoustic Parameters for Data Sonification. 315-323
- Adria Mallol-Ragolta, Svati Dhamija, Terrance E. Boult: A Multimodal Approach for Predicting Changes in PTSD Symptom Severity. 324-333
- Ichiro Umata, Koki Ijuin, Tsuneo Kato, Seiichi Yamamoto: Floor Apportionment and Mutual Gazes in Native and Second-Language Conversation. 334-341
- Satoshi Tsutsui, Sven Bambach, David J. Crandall, Chen Yu: Estimating Head Motion from Egocentric Vision. 342-346
- Indrani Bhattacharya, Michael Foley, Ni Zhang, Tongtao Zhang, Christine Ku, Cameron Mine, Heng Ji, Christoph Riedl, Brooke Foucault Welles, Richard J. Radke: A Multimodal-Sensor-Enabled Room for Unobtrusive Group Meeting Analysis. 347-355
- Chanuwas Aswamenakul, Lixing Liu, Kate B. Carey, Joshua Woolley, Stefan Scherer, Brian Borsari: Multimodal Analysis of Client Behavioral Change Coding in Motivational Interviewing. 356-360
- Hai Xuan Pham, Yuting Wang, Vladimir Pavlovic: End-to-end Learning for 3D Facial Animation from Speech. 361-365
Poster Session 2
- Ehab Albadawy, Yelin Kim: Joint Discrete and Continuous Emotion Prediction Using Ensemble and End-to-End Approaches. 366-375
- Iulia Lefter, Siska Fitrianie: The Multimodal Dataset of Negative Affect and Aggression: A Validation Study. 376-383
- David A. Robb, Francisco Javier Chiyah Garcia, Atanas Laskov, Xingkun Liu, Pedro Patrón, Helen F. Hastie: Keep Me in the Loop: Increasing Operator Situation Awareness through a Conversational Multimodal Interface. 384-392
- Md. Nazmus Sahadat, Nordine Sebkhi, Maysam Ghovanloo: Simultaneous Multimodal Access to Wheelchair and Computer for People with Tetraplegia. 393-399
- Philip Schmidt, Attila Reiss, Robert Dürichen, Claus Marberger, Kristof Van Laerhoven: Introducing WESAD, a Multimodal Dataset for Wearable Stress and Affect Detection. 400-408
- Hisato Fukuda, Keiichi Yamazaki, Akiko Yamazaki, Yosuke Saito, Emi Iiyama, Seiji Yamazaki, Yoshinori Kobayashi, Yoshinori Kuno, Keiko Ikeda: Enhancing Multiparty Cooperative Movements: A Robotic Wheelchair that Assists in Predicting Next Actions. 409-417
- Krishna Somandepalli, Victor R. Martinez, Naveen Kumar, Shrikanth S. Narayanan: Multimodal Representation of Advertisements Using Segment-level Autoencoders. 418-422
- Ilaria Torre, Emma Carrigan, Killian McCabe, Rachel McDonnell, Naomi Harte: Survival at the Museum: A Cooperation Experiment with Emotionally Expressive Virtual Characters. 423-427
- Leshao Zhang, Patrick G. T. Healey: Human, Chameleon or Nodding Dog? 428-436
- Yuchi Huang, Saad M. Khan: A Generative Approach for Dynamically Varying Photorealistic Facial Expressions in Human-Agent Interactions. 437-445
- Philipp Mock, Maike Tibus, Ann-Christine Ehlis, R. Harald Baayen, Peter Gerjets: Predicting ADHD Risk from Touch Interaction Data. 446-454
- Annika Muehlbradt, Madhur Atreya, Darren Guinness, Shaun K. Kane: Exploring the Design of Audio-Kinetic Graphics for Education. 455-463
- Ying-Chao Tung, Mayank Goel, Isaac Zinda, Jacob O. Wobbrock: RainCheck: Overcoming Capacitive Interference Caused by Rainwater on Smartphones. 464-471
- Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency: Multimodal Local-Global Ranking Fusion for Emotion Recognition. 472-476
- Daniel Prendergast, Daniel Szafir: Improving Object Disambiguation from Natural Language using Empirical Models. 477-485
- Bukun Son, Jaeyoung Park: Tactile Sensitivity to Distributed Patterns in a Palm. 486-491
- Hiroki Tanaka, Hideki Negoro, Hidemi Iwasaka, Satoshi Nakamura: Listening Skills Assessment through Computer Agents. 492-496
Doctoral Consortium (alphabetically by author's last name)
- Sedeeq Al-khazraji: Using Data-Driven Approach for Modeling Timing Parameters of American Sign Language. 497-500
- Indrani Bhattacharya: Unobtrusive Analysis of Group Interactions without Cameras. 501-505
- Damien Brun: Multimodal and Context-Aware Interaction in Augmented Reality for Active Assistance. 506-510
- Hamid Karimi: Interpretable Multimodal Deception Detection in Videos. 511-515
- Amanjot Kaur: Attention Network for Engagement Prediction in the Wild. 516-519
- Taras Kucherenko: Data Driven Non-Verbal Behavior Generation for Humanoid Robots. 520-523
- S. M. al Mahi: Multi-Modal Multi Sensor Interaction between Human and Heterogeneous Multi-Robot System. 524-528
- Anindita Nath: Responding with Sentiment Appropriate for the User's Current Sentiment in Dialog as Inferred from Prosody and Gaze Patterns. 529-533
- Sophie Skach: Strike A Pose: Capturing Non-Verbal Behaviour with Textile Sensors. 534-537
- George Sterpu: Large Vocabulary Continuous Audio-Visual Speech Recognition. 538-541
- Chinchu Thomas: Multimodal Teaching and Learning Analytics for Classroom and Online Educational Settings. 542-545
- Özge Nilay Yalçin: Modeling Empathy in Embodied Conversational Agents: Extended Abstract. 546-550
Demo and Exhibit Session
- Niklas Rach, Klaus Weber, Louisa Pragst, Elisabeth André, Wolfgang Minker, Stefan Ultes: EVA: A Multimodal Argumentative Dialogue System. 551-552
- Cheng Zhang, Cheng Chang, Lei Chen, Yang Liu: Online Privacy-Safe Engagement Tracking System. 553-554
- Daniel M. Lofaro, Donald Sofge: Multimodal Control of Lighter-Than-Air Agents. 555-556
- Helen F. Hastie, Francisco Javier Chiyah Garcia, David A. Robb, Atanas Laskov, Pedro Patrón: MIRIAM: A Multimodal Interface for Explaining the Reasoning Behind Actions of Remote Autonomous Systems. 557-558
EAT Grand Challenge
- Simone Hantke, Maximilian Schmitt, Panagiotis Tzirakis, Björn W. Schuller: EAT - The ICMI 2018 Eating Analysis and Tracking Challenge. 559-563
- Fasih Haider, Senja Pollak, Eleni Zarogianni, Saturnino Luz: SAAMEAT: Active Feature Transformation and Selection Methods for the Recognition of User Eating Conditions. 564-568
- Ya'nan Guo, Jing Han, Zixing Zhang, Björn W. Schuller, Yide Ma: Exploring A New Method for Food Likability Rating Based on DT-CWT Theory. 569-573
- Benjamin Sertolli, Nicholas Cummins, Abdulkadir Sengür, Björn W. Schuller: Deep End-to-End Representation Learning for Food Type Recognition from Speech. 574-578
- Dara Pir: Functional-Based Acoustic Group Feature Selection for Automatic Recognition of Eating Condition. 579-583
EmotiW Grand Challenge
- Yingruo Fan, Jacqueline C. K. Lam, Victor O. K. Li: Video-based Emotion Recognition Using Deeply-Supervised Neural Networks. 584-588
- Valentin Vielzeuf, Corentin Kervadec, Stéphane Pateux, Alexis Lechervy, Frédéric Jurie: An Occam's Razor View on Learning Audiovisual Emotion Recognition with Small Training Sets. 589-593
- Jianfei Yang, Kai Wang, Xiaojiang Peng, Yu Qiao: Deep Recurrent Multi-instance Learning with Spatio-temporal Features for Engagement Intensity Prediction. 594-598
- Xuesong Niu, Hu Han, Jiabei Zeng, Xuran Sun, Shiguang Shan, Yan Huang, Songfan Yang, Xilin Chen: Automatic Engagement Prediction with GAP Feature. 599-603
- Chinchu Thomas, Nitin Nair, Dinesh Babu Jayagopi: Predicting Engagement Intensity in the Wild Using Temporal Convolutional Network. 604-610
- Aarush Gupta, Dakshit Agrawal, Hardik Chauhan, Jose Dolz, Marco Pedersoli: An Attention Model for Group-Level Emotion Recognition. 611-615
- Cheng Chang, Cheng Zhang, Lei Chen, Yang Liu: An Ensemble Model Using Face and Body Tracking for Engagement Detection. 616-622
- Ahmed-Shehab Khan, Zhiyuan Li, Jie Cai, Zibo Meng, James O'Reilly, Yan Tong: Group-Level Emotion Recognition using Deep Models with A Four-stream Hybrid Network. 623-629
- Chuanhe Liu, Tianhao Tang, Kui Lv, Minghao Wang: Multi-Feature Based Emotion Recognition for Video Clips. 630-634
- Xin Guo, Bin Zhu, Luisa F. Polanía, Charles Boncelet, Kenneth E. Barner: Group-Level Emotion Recognition Using Hybrid Deep Models Based on Faces, Scenes, Skeletons and Visual Attentions. 635-639
- Kai Wang, Xiaoxing Zeng, Jianfei Yang, Debin Meng, Kaipeng Zhang, Xiaojiang Peng, Yu Qiao: Cascade Attention Networks For Group Emotion Recognition with Face, Body and Image Cues. 640-645
- Cheng Lu, Wenming Zheng, Chaolong Li, Chuangao Tang, Suyuan Liu, Simeng Yan, Yuan Zong: Multiple Spatio-temporal Feature Learning for Video-based Emotion Recognition in the Wild. 646-652
- Abhinav Dhall, Amanjot Kaur, Roland Goecke, Tom Gedeon: EmotiW 2018: Audio-Video, Student Engagement and Group-Level Affect Prediction. 653-656
Workshop Summaries
- Anton Nijholt, Carlos Velasco, Marianna Obrist, Katsunori Okajima, Charles Spence: 3rd International Workshop on Multisensory Approaches to Human-Food Interaction. 657-659
- Gabriel Murray, Hayley Hung, Joann Keyton, Catherine Lai, Nale Lehmann-Willenbrock, Catharine Oertel: Group Interaction Frontiers in Technology. 660-662
- Felix Putze, Jutta Hild, Akane Sano, Enkelejda Kasneci, Erin Solovey, Tanja Schultz: Modeling Cognitive Processes from Multimodal Signals. 663
- Theodora Chaspari, Angeliki Metallinou, Leah I. Stein Duker, Amir H. Behzadan: Human-Habitat for Health (H3): Human-habitat Multimodal Interaction for Promoting Health and Well-being in the Internet of Things Era. 664-665
- Ronald Böck, Francesca Bonin, Nick Campbell, Ronald Poppe: International Workshop on Multimodal Analyses Enabling Artificial Agents in Human-Machine Interaction (Workshop Summary). 666-667