ICMI 2024: San Jose, Costa Rica - Companion Publication
- Hayley Hung, Catharine Oertel, Mohammad Soleymani, Theodora Chaspari, Hamdi Dibeklioglu, Jainendra Shukla, Khiet P. Truong: Companion Proceedings of the 26th International Conference on Multimodal Interaction, ICMI Companion 2024, San Jose, Costa Rica, November 4-8, 2024. ACM 2024, ISBN 979-8-4007-0463-5
Late Breaking Results
- Yuen C. Law, Harrison Mendieta-Dávila, Daniel García-Fallas, Rogelio González-Quirós, Mario Chacón-Rivas: User-Defined Interaction for Very Low-Cost Head-Mounted Displays. 1-5
- Pradip Pramanick, Luca Raggioli, Alessandra Rossi, Silvia Rossi: Effects of Incoherence in Multimodal Explanations of Robot Failures. 6-10
- Surely Akiri, Vasundhara Joshi, Sanaz Taherzadeh, Gary Williams, Helena M. Mentis, Andrea Kleinsmith: Design and Preliminary Evaluation of a Stress Reflection System for High-Stress Training Environments. 11-15
- Shigeharu Ono, Noboru Ninomiya, Hideaki Kanai: Haptic Feedback to Reduce Individual Differences in Corrective Actions for Skill Learning. 16-20
- Chengyu Fan, Verónica Romero, Alexandra Paxton, Tahiya Chowdhury: Towards Multimodality: Comparing Quantifications of Movement Coordination. 21-25
- Saba Nazir, Mehrnoosh Sadrzadeh: The Potential of Multimodal Compositionality for Enhanced Recommendations through Sentiment Analysis. 26-30
- Alessandro G. Di Nuovo, Adam Kay: Enhancing Autism Spectrum Disorder Screening: Implementation and Pilot Testing of a Robot-Assisted Digital Tool. 31-35
- Miuyin Yong Wong, Kevin Valakuzhy, Mustaque Ahamad, Douglas M. Blough, Fabian Monrose: Understanding LLMs Ability to Aid Malware Analysts in Bypassing Evasion Techniques. 36-40
- Dan Bohus, Sean Andrist, Yuwei Bao, Eric Horvitz, Ann Paradiso: "Is This It?": Towards Ecologically Valid Benchmarks for Situated Collaboration. 41-45
- Crystal Yang, Paul Taele: An Audiotactile System for Accessible Graphs on a Coordinate Plane. 46-50
- Anoop K. Sinha, Chinmay Kulkarni, Alex Olwal: Levels of Multimodal Interaction. 51-55
- Emma Jane Pretty, Renan Luigi Martins Guarese, Haytham M. Fayek, Fabio Zambetta: Comparing Subjective Measures of Workload in Video Game Play: Evaluating the Test-Retest Reliability of the VGDS and NASA-TLX. 56-60
- Sachin Pathiyan Cherumanal, Falk Scholer, Johanne R. Trippas, Damiano Spina: Towards Investigating Biases in Spoken Conversational Search. 61-66
- Yukun Wang, Masaki Ohno, Takuji Narumi, Young Ah Seong: Crossmodal Correspondences between Piquancy/Spiciness and Visual Shape. 67-71
Tutorials
- Bernd Dudzik, José Vargas Quiros: The OpenVIMO Platform: A Tutorial on Building and Managing Large-scale Online Experiments involving Videoconferencing. 72-74
Demonstrations & Exhibits
- Joaquin Molto, Jonathan Fields, Ubbo Visser, Christine L. Lisetti: An LLM-powered Socially Interactive Agent with Adaptive Facial Expressions for Conversing about Health. 75-77
- Palash Nandy, Sigurdur O. Adalgeirsson, Anoop K. Sinha, Tanya Kraljic, Mike Cleron, Lei Shi, Angad Singh, Ashish Chaudhary, Ashwin Ganti, Chris Melancon, Shudi Zhang, David Robishaw, Horia Stefan Ciurdar, Justin Secor, Kenneth Aleksander Robertsen, Kirsten Climer, Madison Le, Mathangi Venkatesan, Peggy Chi, Peixin Li, Peter F. McDermott, Rachel Shim, Selcen Onsan, Shilp Vaishnav, Stephanie Guamán: Bespoke: Using LLM agents to generate just-in-time interfaces by reasoning about user intent. 78-81
- Rakesh Chowdary Yarlagadda, Pranjal Aggarwal, Vaibhav Jamadagni, Ghritachi Mahajani, Pavan Kumar Malasani, Ehsanul Haque Nirjhar, Theodora Chaspari: An AI-Powered Interactive Interface to Enhance Accessibility of Interview Training for Military Veterans. 82-84
- Carolin Schindler, Daiki Mayumi, Yuki Matsuda, Niklas Rach, Keiichi Yasumoto, Wolfgang Minker: ARCADE: An Augmented Reality Display Environment for Multimodal Interaction with Conversational Agents. 85-87
- Ryo Ishii, Shin'ichiro Eitoku, Shohei Matsuo, Motohiro Makiguchi, Ayami Hoshi, Louis-Philippe Morency: Let's Dance Together! AI Dancers Can Dance to Your Favorite Music and Style. 88-90
- Hannes Kath, Ilira Troshani, Bengt Lüers, Thiago S. Gouvêa, Daniel Sonntag: Enhancing Biodiversity Monitoring: An Interactive Tool for Efficient Identification of Species in Large Bioacoustics Datasets. 91-93
- Chee Wee Leong, Navaneeth Jawahar, Vinay Basheerabad, Torsten Wörtwein, Andrew Emerson, Guy Sivan: Combining Generative and Discriminative AI for High-Stakes Interview Practice. 94-96
- Metehan Doyran, Albert Ali Salah, Ronald Poppe: Human Contact Annotator: Annotating Physical Contact in Dyadic Interactions. 97-99
Workshop: First Multimodal Banquet: Exploring Innovative Technology for Commensality and Human-Food Interaction
- Aidan J. Beery, Daniel W. Eastman, Jake Enos, William Richards, Patrick J. Donnelly: Smart Compost Bin for Measurement of Consumer Food Waste. 100-107
- Mario O. Parra, Jesús Favela, Luís A. Castro, Daniel Gatica-Perez: Towards Wine Tasting Activity Recognition for a Digital Sommelier. 108-112
- Lei Gao, Yutaka Tokuda, Shubhi Bansal, Sriram Subramanian: Computational Gastronomy and Eating with Acoustophoresis. 113-116
- Kheder Yazgi, Cigdem Beyan, Maurizio Mancini, Radoslaw Niewiadomski: Automatic Recognition of Commensal Activities in Co-located and Online Settings. 117-121
- Albana Hoxha, Hunter Fong, Radoslaw Niewiadomski: Do We Need Artificial Dining Companions? Exploring Human Attitudes Toward Robots in Commensality Settings. 122-128
- Annika Capada, Ryan Deculawan, Lauren Garcia, Sophia Oquias, Ron Resurreccion, Jocelynn Cu, Merlin Suarez: Analyzing Emotion Impact of Mukbang Viewing Through Facial Expression Recognition using Support Vector Machine. 129-133
- Haeji Shin, Christopher Dawes, Jing Xue, Marianna Obrist: How Does Red Taste?: Exploring How Colour-Taste Associations Affect Our Experience of Food in Real Life and Extended Reality. 134-137
Workshop: GENEA: Generation and Evaluation of Non-verbal Behaviour for Embodied Agents
- Louis Abel, Vincent Colotte, Slim Ouni: Towards Interpretable Co-speech Gestures Synthesis Using STARGATE. 138-146
- Mickaëlla Grondin-Verdon, Domitille Caillat, Slim Ouni: Qualitative Study of Gesture Annotation Corpus: Challenges and Perspectives. 147-155
- Axel Wiebe Werner, Jonas Beskow, Anna Deichler: Gesture Evaluation in Virtual Reality. 156-164
- Rodolfo Luis Tonoli, Paula Dornhofer Paro Costa, Leonardo Boulitreau de Menezes Martins Marques, Lucas Hideki Ueda: Gesture Area Coverage to Assess Gesture Expressiveness and Human-Likeness. 165-169
- Johsac Isbac Gomez Sanchez, Kevin Adier Inofuente Colque, Leonardo Boulitreau de Menezes Martins Marques, Paula Dornhofer Paro Costa, Rodolfo Luis Tonoli: Benchmarking Speech-Driven Gesture Generation Models for Generalization to Unseen Voices and Noisy Environments. 170-174
Workshop: HumanEYEze: Eye Tracking for Multimodal Human-Centric Computing
- Eduardo Davalos, Yike Zhang, Ashwin T. S., Joyce Horn Fonteles, Umesh Timalsina, Gautam Biswas: 3D Gaze Tracking for Studying Collaborative Interactions in Mixed-Reality Environments. 175-183
- Sharath C. Koorathota, Nikolas Papadopoulos, Jia Li Ma, Shruti Kumar, Xiaoxiao Sun, Arunesh Mittal, Patrick Adelman, Paul Sajda: Gaze-Informed Vision Transformers: Predicting Driving Decisions Under Uncertainty. 184-194
- Omair Shahzad Bhatti, Harshinee Sriram, Abdulrahman Mohamed Selim, Cristina Conati, Michael Barz, Daniel Sonntag: Detecting when Users Disagree with Generated Captions. 195-203
- Mohammadhossein Salari, Roman Bednarik: Investigating the Impact of Illumination Change on the Accuracy of Head-Mounted Eye Trackers: A Protocol and Initial Results. 204-210
Workshop: Multimodal Co-Construction of Explanations with XAI
- Rui Pedro da Costa Porfírio, Pedro Albuquerque Santos, Rui Neves Madeira: Enhancing Digital Agriculture with XAI: Case Studies on Tabular Data and Future Directions. 211-217
- Amit Singh, Katharina J. Rohlfing: Coupling of Task and Partner Model: Investigating the Intra-Individual Variability in Gaze during Human-Robot Explanatory Dialogue. 218-224
- Milena Belosevic, Hendrik Buschmeier: Quote to Explain: Using Multimodal Metalinguistic Markers to Explain Large Language Models' Understanding Capabilities. 225-227
- Youssef Mahmoud Youssef, Teena Hassan: Towards Multimodal Co-Construction of Explanations for Robots: Combining Inductive Logic Programming and Large Language Models to Explain Robot Faults. 228-230