Journal on Multimodal User Interfaces, Volume 17
Volume 17, Number 1, March 2023
- Natalia Sevcenko, Tobias Appel, Manuel Ninaus, Korbinian Moeller, Peter Gerjets: Theory-based approach for assessing cognitive load during time-critical resource-managing human-computer interactions: an eye-tracking study. 1-19
- Miao Huang, Chien-Hsiung Chen: The effects of olfactory cues as interface notifications on a mobile phone. 21-32
- Lucas El Raghibi, Ange Pascal Muhoza, Jeanne Evrard, Hugo Ghazi, Grégoire van Oldeneel tot Oldenzeel, Victorien Sonneville, Benoît Macq, Renaud Ronsse: Virtual reality can mediate the learning phase of upper limb prostheses supporting a better-informed selection process. 33-46
Volume 17, Number 2, June 2023
- Candy Olivia Mawalim, Shogo Okada, Yukiko I. Nakano, Masashi Unoki: Personality trait estimation in group discussions using multimodal analysis and speaker embedding. 47-63
- Hiu Lam Yip, Karin Petrini: Investigating the influence of agent modality and expression on agent-mediated fairness behaviours. 65-77
- Ali Abdulrazzaq Alsamarei, Bahar Sener: Remote social touch framework: a way to communicate physical interactions across long distances. 79-104
Volume 17, Number 3, September 2023
- Sophie Dewil, Shterna Kuptchik, Mingxiao Liu, Sean Sanford, Troy Bradbury, Elena Davis, Amanda Clemente, Raviraj Nataraj: The cognitive basis for virtual reality rehabilitation of upper-extremity motor function after neurotraumas. 105-120
- Özgür Tamer, Barbaros Kirisken, Tunca Köklü: A low duration vibro-tactile representation of Braille characters. 121-135
- Elias Elmquist, Alexander Bock, Jonas Lundberg, Anders Ynnerman, Niklas Rönnberg: SonAir: the design of a sonification of radar data for air traffic control. 137-149
- Guoxuan Ning, Brianna Grant, Bill Kapralos, Alvaro J. Uribe-Quevedo, K. C. Collins, Kamen Kanev, Adam Dubrowski: Understanding virtual drilling perception using sound, and kinesthetic cues obtained with a mouse and keyboard. 151-163
- Guoxuan Ning, Brianna Grant, Bill Kapralos, Alvaro Uribe-Quevedo, K. C. Collins, Kamen Kanev, Adam Dubrowski: Correction to: Understanding virtual drilling perception using sound, and kinesthetic cues obtained with a mouse and keyboard. 165
- Santiago Villarreal-Narvaez, Jorge Luis Pérez-Medina, Jean Vanderdonckt: Exploring user-defined gestures for lingual and palatal interaction. 167-185
- Paula Castro Sánchez, Casey C. Bennett: Facial expression recognition via transfer learning in cooperative game paradigms for enhanced social AI. 187-201
- Haram Choi, Joung-Huem Kwon, Sanghun Nam: Research on the application of gaze visualization interface on virtual reality training systems. 203-211
Volume 17, Number 4, December 2023
- Tim Ziemer, Sara Lenzi, Niklas Rönnberg, Thomas Hermann, Roberto Bresin: Introduction to the special issue on design and perception of interactive sonification. 213-214
- Jason Sterkenburg, Steven Landry, Seyedeh Maryam Fakhrhosseini, Myounghoon Jeon: In-vehicle air gesture design: impacts of display modality and control orientation. 215-230
- Adrian Benigno Latupeirissa, Roberto Bresin: PepperOSC: enabling interactive sonification of a robot's expressive movement. 231-239
- Adrian Benigno Latupeirissa, Roberto Bresin: Correction to: PepperOSC: enabling interactive sonification of a robot's expressive movement. 241
- Simon Linke, Rolf Bader, Robert Mores: Model-based sonification based on the impulse pattern formulation. 243-251
- Tim Ziemer: Three-dimensional sonification as a surgical guidance tool. 253-262
- Mariana Seiça, Licínio Roque, Pedro Martins, F. Amílcar Cardoso: An interdisciplinary journey towards an aesthetics of sonification experience. 263-284
- Joe Fitzpatrick, Flaithrí Neff: Perceptually congruent sonification of auditory line charts. 285-300