Journal on Multimodal User Interfaces, Volume 5
Volume 5, Numbers 1-2, March 2012
- Olga Sourina, Ling Li, Zhigeng Pan:
Emotion-based interaction. 1
- Zhiguo Shi, Junming Wei, Zhiliang Wang, Jun Tu, Qiao Zhang:
Affective transfer computing model based on attenuation emotion mechanism. 3-18
- Mingmin Zhang, Xiaojian Zhou, Nan Xiang, Yuyong He, Zhigeng Pan:
Expression sequences generator for synthetic emotion. 19-25
- Olga Sourina, Yisi Liu, Minh Khoa Nguyen:
Real-time EEG-based emotion recognition for music therapy. 27-35
- Kang Liu, Jörn Ostermann:
Evaluation of an image-based talking head with realistic facial expression and head motion. 37-44
- Yin-Leng Theng, Paye Aung:
Investigating effects of avatars on primary school children's affective responses to learning. 45-52
- Dirk Heylen, Betsy van Dijk, Anton Nijholt:
Robotic Rabbit Companions: amusing or a nuisance? 53-59
- Minghao Yang, Jianhua Tao, Kaihui Mu, Ya Li, Jianfeng Che:
A multimodal approach of generating 3D human-like talking agent. 61-68
- Ina Conradi:
Art at the edges of materiality. 69-75
- Shiwei Cheng, Ying Liu:
Eye-tracking based adaptive user interface: implicit human-computer interaction for preference indication. 77-84
Volume 5, Numbers 3-4, May 2012
- Roberto Bresin, Thomas Hermann, Andy Hunt:
Interactive sonification. 85-86
- Florian Grond, Thomas Hermann:
Singing function. 87-95
- Sam Ferguson, Kirsty A. Beilharz, Claudia A. Calò:
Navigation of interactive sonifications and visualisations of time-series data using multi-touch computing. 97-109
- Oussama Metatla, Nick Bryan-Kinns, Tony Stockman:
Interactive hierarchy-based auditory displays for accessing and manipulating relational diagrams. 111-122
- Steven R. Ness, Paul Reimer, Justin Love, W. Andrew Schloss, George Tzanetakis:
Sonophenology. 123-129
- Dalia El-Shimy, Florian Grond, Adriana Olmos, Jeremy R. Cooperstock:
Eyes-free environmental awareness for navigation. 131-141
- Gaël Dubus:
Evaluation of four models for the sonification of elite rowing. 143-156
- Giovanna Varni, Gaël Dubus, Sami Oksanen, Gualtiero Volpe, Marco Fabiani, Roberto Bresin, Jari Kleimola, Vesa Välimäki, Antonio Camurri:
Interactive sonification of synchronisation of motoric behaviour in social active listening to music with mobile devices. 157-173
- Patrick Susini, Nicolas Misdariis, Guillaume Lemaitre, Olivier Houix:
Naturalness influences the perceived usability and pleasantness of an interface's sonic feedback. 175-186
- Carlo Drioli, Davide Rocchesso:
Acoustic rendering of particle-based simulation of liquids in motion. 187-195
- Saskia Bakker, Elise van den Hoven, Berry Eggen:
Knowing by ear: leveraging human attention abilities in interaction design. 197-209
- Nuno Diniz, Pieter Coussement, Alexander Deweppe, Michiel Demey, Marc Leman:
An embodied music cognition approach to multilevel interactive sonification. 211-219