- case-insensitive prefix search (default): e.g., sig matches "SIGIR" as well as "signal"
- exact word search (append a dollar sign ($) to the word): e.g., graph$ matches "graph", but not "graphics"
- boolean and (separate words by a space): e.g., codd model
- boolean or (connect words by the pipe symbol (|)): e.g., graph|network
Update (May 7, 2017): the phrase search operator (.) and the boolean not operator (-) have been disabled due to technical problems. For the time being, phrase search queries yield regular prefix search results, and search terms preceded by a minus are interpreted as regular (positive) search terms.
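The query semantics above (case-insensitive prefix matching by default, `$` for exact words, space for AND, `|` for OR) can be sketched as follows. This is a minimal, hypothetical re-implementation for illustration only; it does not describe how the actual search engine works, and the function names are assumptions.

```python
# Hypothetical sketch of the query semantics described above, NOT the real
# search implementation. Supports:
#   - case-insensitive prefix search (default)
#   - exact word search via a trailing dollar sign ($)
#   - boolean AND (space-separated groups) and OR (pipe-separated alternatives)

def term_matches(term: str, words: list[str]) -> bool:
    """Check one search term against the lowercased words of a record."""
    if term.endswith("$"):                 # exact word search
        exact = term[:-1].lower()
        return any(w == exact for w in words)
    prefix = term.lower()                  # default: prefix search
    return any(w.startswith(prefix) for w in words)

def matches(query: str, text: str) -> bool:
    """AND over space-separated groups; OR over pipe-separated alternatives."""
    words = text.lower().split()
    return all(
        any(term_matches(alt, words) for alt in group.split("|"))
        for group in query.split()
    )

# Examples mirroring the help text:
assert matches("sig", "SIGIR signal processing")            # prefix matches both
assert matches("graph$", "a graph algorithm")               # exact word
assert not matches("graph$", "computer graphics")           # "graphics" excluded
assert matches("codd model", "the Codd relational model")   # boolean AND
assert matches("graph|network", "social network analysis")  # boolean OR
```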
Author search results: no matches

Venue search results: no matches
Publication search results: found 33 matches
2019

- Burak Benligiray, Cihan Topal, Cuneyt Akinlar: SliceType: fast gaze typing with a merging keyboard. J. Multimodal User Interfaces 13(4): 321-334 (2019)
- Michael Braun, Nora Broy, Bastian Pfleging, Florian Alt: Visualizing natural language interaction for conversational in-vehicle information systems to minimize driver distraction. J. Multimodal User Interfaces 13(2): 71-88 (2019)
- Merijn Bruijnes, Jeroen Linssen, Dirk Heylen: Special issue editorial: Virtual Agents for Social Skills Training. J. Multimodal User Interfaces 13(1): 1-2 (2019)
- Nadia Elouali: Time Well Spent with multimodal mobile interactions. J. Multimodal User Interfaces 13(4): 395-404 (2019)
- Seyedeh Maryam Fakhrhosseini, Myounghoon Jeon: How do angry drivers respond to emotional music? A comprehensive perspective on assessing emotion. J. Multimodal User Interfaces 13(2): 137-150 (2019)
- Jana Fank, Natalie Tara Richardson, Frank Diermeyer: Anthropomorphising driver-truck interaction: a study on the current state of research and the introduction of two innovative concepts. J. Multimodal User Interfaces 13(2): 99-117 (2019)
- Emma Frid, Ludvig Elblaus, Roberto Bresin: Interactive sonification of a fluid dance movement: an exploratory study. J. Multimodal User Interfaces 13(3): 181-189 (2019)
- Emma Frid, Jonas Moll, Roberto Bresin, Eva-Lotta Sallnäs Pysander: Haptic feedback combined with movement sonification using a friction sound improves task performance in a virtual throwing task. J. Multimodal User Interfaces 13(4): 279-290 (2019)
- Emma Frid, Jonas Moll, Roberto Bresin, Eva-Lotta Sallnäs Pysander: Correction to: Haptic feedback combined with movement sonification using a friction sound improves task performance in a virtual throwing task. J. Multimodal User Interfaces 13(4): 291 (2019)
- Yousra Bendaly Hlaoui, Lamia Zouhaier, Leila Ben Ayed: Model driven approach for adapting user interfaces to the context of accessibility: case of visually impaired users. J. Multimodal User Interfaces 13(4): 293-320 (2019)
- Hsinfu Huang, Ta-chun Huang: Thumb touch control range and usability factors of virtual keys for smartphone games. J. Multimodal User Interfaces 13(4): 267-278 (2019)
- Jinghua Li, Huarui Huai, Junbin Gao, Dehui Kong, Lichun Wang: Spatial-temporal dynamic hand gesture recognition via hybrid deep learning model. J. Multimodal User Interfaces 13(4): 363-371 (2019)
- Andreas Löcken, Fei Yan, Wilko Heuten, Susanne Boll: Investigating driver gaze behavior during lane changes using two visual cues: ambient light and focal icons. J. Multimodal User Interfaces 13(2): 119-136 (2019)
- Valerio Lorenzoni, Pieter Van den Berghe, Pieter-Jan Maes, Tijl De Bie, Dirk De Clercq, Marc Leman: Design and validation of an auditory biofeedback system for modification of running parameters. J. Multimodal User Interfaces 13(3): 167-180 (2019)
- Pieter-Jan Maes, Valerio Lorenzoni, Joren Six: The SoundBike: musical sonification strategies to enhance cyclists' spontaneous synchronization to external music. J. Multimodal User Interfaces 13(3): 155-166 (2019)
- Yogesh Kumar Meena, Hubert Cecotti, KongFatt Wong-Lin, Girijesh Prasad: Design and evaluation of a time adaptive multimodal virtual keyboard. J. Multimodal User Interfaces 13(4): 343-361 (2019)
- Radoslaw Niewiadomski, Maurizio Mancini, Andrea Cera, Stefano Piana, Corrado Canepa, Antonio Camurri: Does embodied training improve the recognition of mid-level expressive movement qualities sonification? J. Multimodal User Interfaces 13(3): 191-203 (2019)
- Magalie Ochs, Daniel Mestre, Grégoire de Montcheuil, Jean-Marie Pergandi, Jorane Saubesty, Evelyne Lombardo, Daniel Francon, Philippe Blache: Training doctors' social skills to break bad news: evaluation of the impact of virtual environment displays on the sense of presence. J. Multimodal User Interfaces 13(1): 41-51 (2019)
- Antonio Polo, Xavier Sevillano: Musical Vision: an interactive bio-inspired sonification tool to convert images into music. J. Multimodal User Interfaces 13(3): 231-243 (2019)
- Samuel Recht, Ouriel Grynszpan: The sense of social agency in gaze leading. J. Multimodal User Interfaces 13(1): 19-30 (2019)
- Florian Roider, Sonja Rümelin, Bastian Pfleging, Tom Gross: Investigating the effects of modality switches on driver distraction and interaction efficiency in the car. J. Multimodal User Interfaces 13(2): 89-97 (2019)
- Niklas Rönnberg: Sonification supports perception of brightness contrast. J. Multimodal User Interfaces 13(4): 373-381 (2019)
- Kunhee Ryu, Joong-Jae Lee, Jung-Min Park: GG Interaction: a gaze-grasp pose interaction for 3D virtual object selection. J. Multimodal User Interfaces 13(4): 383-393 (2019)
- Dirk Schnelle-Walka, David R. McGee, Bastian Pfleging: Multimodal interaction in automotive applications. J. Multimodal User Interfaces 13(2): 53-54 (2019)
- Hari Singh, Jaswinder Singh: Object acquisition and selection using automatic scanning and eye blinks in an HCI system. J. Multimodal User Interfaces 13(4): 405-417 (2019)
- Piotr Skulimowski, Mateusz Owczarek, Andrzej Radecki, Michal Bujacz, Dariusz Rzeszotarski, Pawel Strumillo: Interactive sonification of U-depth images in a navigation aid for the visually impaired. J. Multimodal User Interfaces 13(3): 219-230 (2019)
- Hyoung Il Son: The contribution of force feedback to human performance in the teleoperation of multiple unmanned aerial vehicles. J. Multimodal User Interfaces 13(4): 335-342 (2019)
- Jason Sterkenburg, Steven Landry, Myounghoon Jeon: Design and evaluation of auditory-supported air gesture controls in vehicles. J. Multimodal User Interfaces 13(2): 55-70 (2019)
- Berglind Sveinbjörnsdóttir, Snorri Hjörvar Jóhannsson, Júlía Oddsdóttir, Tinna Þuríður Sigurðardóttir, Gunnar Ingi Valdimarsson, Hannes Högni Vilhjálmsson: Virtual discrete trial training for teacher trainees. J. Multimodal User Interfaces 13(1): 31-40 (2019)
- Kim Veltman, Harmen de Weerd, Rineke Verbrugge: Training the use of theory of mind using artificial agents. J. Multimodal User Interfaces 13(1): 3-18 (2019)
(3 more matches not shown)
retrieved on 2024-06-04 00:18 CEST from data curated by the dblp team
all metadata released as open data under CC0 1.0 license