ETRA 2019: Denver, CO, USA
- Krzysztof Krejtz, Bonita Sharif: Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications, ETRA 2019, Denver, CO, USA, June 25-28, 2019. ACM 2019, ISBN 978-1-4503-6709-7
- Justin Le Louedec, Thomas Guntz, James L. Crowley, Dominique Vaufreydaz: Deep learning investigation for chess player attention prediction using eye-tracking and game data. 1:1-1:9
- Reuben M. Aronson, Henny Admoni: Semantic gaze labeling for human-robot shared manipulation. 2:1-2:9
- Almoctar Hassoumi, Vsevolod Peysakhovich, Christophe Hurter: EyeFlow: pursuit interactions using an unmodified camera. 3:1-3:10
- Jonas Goltz, Michael Grossberg, Ronak Etemadpour: Exploring simple neural network architectures for eye movement classification. 4:1-4:5
- Islam Akef Ebeid, Nilavra Bhattacharya, Jacek Gwizdka, Abhra Sarkar: Analyzing gaze transition behavior using Bayesian mixed effects Markov models. 5:1-5:5
- Ludwig Sidenmark, Anders Lundström: Gaze behaviour on interacted objects during hand interaction in virtual reality for eye tracking calibration. 6:1-6:9
- Heiko Drewes, Ken Pfeuffer, Florian Alt: Time- and space-efficient eye tracker calibration. 7:1-7:8
- Jimin Pi, Bertram E. Shi: Task-embedded online eye-tracker calibration for improving robustness to head motion. 8:1-8:9
- Philipp Müller, Daniel Buschek, Michael Xuelin Huang, Andreas Bulling: Reducing calibration drift in mobile eye trackers by exploiting mobile phone usage. 9:1-9:9
- Dan Witzner Hansen, Amelie Heinrich, Rouwen Cañal-Bruland: Aiming for the quiet eye in biathlon. 10:1-10:7
- Nelson Silva, Tanja Blascheck, Radu Jianu, Nils Rodrigues, Daniel Weiskopf, Martin Raubal, Tobias Schreck: Eye tracking support for visual analytics systems: foundations, current applications, and research challenges. 11:1-11:10
- Valentin Bruder, Kuno Kurzhals, Steffen Frey, Daniel Weiskopf, Thomas Ertl: Space-time volume visualization of gaze and stimulus. 12:1-12:9
- Nahla J. Abid, Jonathan I. Maletic, Bonita Sharif: Using developer eye movements to externalize the mental model used in code summarization tasks. 13:1-13:9
- Tanja Blascheck, Bonita Sharif: Visually analyzing eye movements on natural language texts and source code snippets. 14:1-14:9
- Unaizah Obaidellah, Michael Raschke, Tanja Blascheck: Classification of strategies for solving programming problems using AoI sequence analysis. 15:1-15:9
- Frank Helbert Borsato, Carlos H. Morimoto: Towards a low cost and high speed mobile eye tracker. 16:1-16:9
- Thiago Santini, Diederick C. Niehorster, Enkelejda Kasneci: Get a grip: slippage-robust and glint-free gaze estimation for real-time pervasive head-mounted eye tracking. 17:1-17:10
- Ioannis Agtzidis, Michael Dorr: Getting (more) real: bringing eye movement classification to HMD experiments with equirectangular stimuli. 18:1-18:8
- Dmytro Katrychuk, Henry K. Griffith, Oleg V. Komogortsev: Power-efficient and shift-robust eye-tracking sensor for portable VR headsets. 19:1-19:8
- Diako Mardanbegi, Christopher Clarke, Hans Gellersen: Monocular gaze depth estimation using the vestibulo-ocular reflex. 20:1-20:9
- Pranav Venuprasad, Tushar Dobhal, Anurag Paul, Tu N. M. Nguyen, Andrew Gilman, Pamela C. Cosman, Leanne Chukoskie: Characterizing joint attention behavior during real world interactions using automated object and gaze detection. 21:1-21:8
- Mikhail Startsev, Stefan Göb, Michael Dorr: A novel gaze event detection metric that is not fooled by gaze-independent baselines. 22:1-22:9
- Kai Dierkes, Moritz Kassner, Andreas Bulling: A fast approach to refraction-aware eye-model fitting and gaze prediction. 23:1-23:9
- Masato Sasaki, Takashi Nagamatsu, Kentaro Takemura: Screen corner detection using polarization camera for cross-ratio based gaze estimation. 24:1-24:9
- Andrew T. Duchowski, Sophie Jörg, Jaret Screws, Nina A. Gehrer, Michael Schönenberg, Krzysztof Krejtz: Guiding gaze: expressive models of reading and face scanning. 25:1-25:9
- Julian Steil, Marion Koelle, Wilko Heuten, Susanne Boll, Andreas Bulling: PrivacEye: privacy-preserving head-mounted eye tracking using egocentric scene image and eye movement features. 26:1-26:10
- Julian Steil, Inken Hagestedt, Michael Xuelin Huang, Andreas Bulling: Privacy-aware eye tracking using differential privacy. 27:1-27:9
- Ao Liu, Lirong Xia, Andrew T. Duchowski, Reynold Bailey, Kenneth Holmqvist, Eakta Jain: Differential privacy for eye-tracking data. 28:1-28:10
- Yasmeen Abdrabou, Mohamed Khamis, Rana Mohamed Eisa, Sherif Ismail, Amr El Mougy: Just gaze and wave: exploring the use of gaze and gestures for shoulder-surfing resilient authentication. 29:1-29:10
- Nishan Gunawardena, Michael Matscheko, Bernhard Anzengruber, Alois Ferscha, Martin Schobesberger, Andreas Shamiyeh, Bettina Klugsberger, Peter Solleder: Assessing surgeons' skill level in laparoscopic cholecystectomy using eye metrics. 30:1-30:8
- Rémy Siegfried, Yu Yu, Jean-Marc Odobez: A deep learning approach for robust head pose independent eye movements recognition from videos. 31:1-31:5
- Per Bækgaard, John Paulin Hansen, Katsumi Minakata, I. Scott MacKenzie: A Fitts' law study of pupil dilations in a head-mounted display. 32:1-32:5
- Congcong Liu, Yuying Chen, Lei Tai, Haoyang Ye, Ming Liu, Bertram E. Shi: A gaze model improves autonomous driving. 33:1-33:5
- André Frank Krause, Kai Essig: Boosting speed- and accuracy of gradient based dark pupil tracking using vectorization and differential evolution. 34:1-34:5
- Yasmeen Abdrabou, Mariam Mostafa, Mohamed Khamis, Amr El Mougy: Calibration-free text entry using smooth pursuit eye movements. 35:1-35:5
- Christopher G. Harris: Detecting cognitive bias in a relevance assessment task using an eye tracker. 36:1-36:5
- Brendan John, Sanjeev J. Koppal, Eakta Jain: EyeVEIL: degrading iris authentication in eye tracking headsets. 37:1-37:5
- Cole S. Peterson, Nahla J. Abid, Corey A. Bryant, Jonathan I. Maletic, Bonita Sharif: Factors influencing dwell time during source code reading: a large-scale replication experiment. 38:1-38:4
- Wolfgang Fuhl, Nora Castner, Thomas C. Kübler, Rene Alexander Lotz, Wolfgang Rosenstiel, Enkelejda Kasneci: Ferns for area of interest free scanpath classification. 39:1-39:5
- Shaharam Eivazi, Thiago Santini, Alireza Keshavarzi, Thomas C. Kübler, Andrea Mazzei: Improving real-time CNN-based pupil detection through domain-specific data augmentation. 40:1-40:6
- Stefanie Müller: Inferring target locations from gaze data: a smartphone study. 41:1-41:4
- Yu Li, Carla Allen, Chi-Ren Shyu: Quantifying and understanding the differences in visual activities with contrast subsequences. 42:1-42:5
- Conor Kelton, Zijun Wei, Seoyoung Ahn, Aruna Balasubramanian, Samir R. Das, Dimitris Samaras, Gregory J. Zelinsky: Reading detection in real-time. 43:1-43:5
- Takamasa Utsu, Kentaro Takemura: Remote corneal imaging by integrating a 3D face model and an eyeball model. 44:1-44:5
- Andoni Larumbe-Bergera, Sonia Porta, Rafael Cabeza, Arantxa Villanueva: SeTA: semiautomatic tool for annotation of eye tracking images. 45:1-45:5
- Davide De Tommaso, Agnieszka Wykowska: TobiiGlassesPySuite: an open-source suite for using the Tobii Pro Glasses 2 in eye-tracking studies. 46:1-46:5
- Natalia Chitalkina: When you don't see what you expect: incongruence in music and source code reading. 47:1-47:3
- Tanya Bafna, John Paulin Hansen: Eye-tracking based fatigue and cognitive assessment: doctoral symposium, extended abstract. 48:1-48:3
- Brendan John: Pupil diameter as a measure of emotion and sickness in VR. 49:1-49:3
- Guangtao Zhang, John Paulin Hansen: Accessible control of telepresence robots based on eye tracking. 50:1-50:3
- Pablo Fontoura, Jean-Marie Schaeffer, Michel Menu: The vision and interpretation of paintings: bottom-up visual processes, top-down culturally informed attention, and aesthetic experience. 51:1-51:3
- Rébaï Soret, Christophe Hurter, Vsevolod Peysakhovich: Attentional orienting in real and virtual 360-degree environments: applications to aeronautics. 52:1-52:3
- Aayush K. Chaudhary: Motion tracking of iris features for eye tracking. 53:1-53:3
- Sai Akanksha Punuganti, Jing Tian, Jorge Otero-Millan: Automatic quick-phase detection in bedside recordings from patients with acute dizziness and nystagmus. 54:1-54:3
- Zhizhuo Yang, Reynold Bailey: Towards a data-driven framework for realistic self-organized virtual humans: coordinated head and eye movements. 55:1-55:3
- Justyna Zurawska: Microsaccadic and pupillary response to tactile task difficulty. 56:1-56:3
- Jonathan A. Saddler: Looks can mean achieving: understanding eye gaze patterns of proficiency in code comprehension. 57:1-57:3
- Norick R. Bowers, Agostino Gibaldi, Emma Alexander, Martin S. Banks, Austin Roorda: High-resolution eye tracking using scanning laser ophthalmoscopy. 58:1-58:3
- Andrea Strandberg: Eye movements during reading and reading assessment in Swedish school children: a new window on reading difficulties. 59:1-59:3
- Sébastien Lallé, Cristina Conati, Dereck Toker: A gaze-based experimenter platform for designing and evaluating adaptive interventions in information visualizations. 60:1-60:3
- Lucas Paletta, Amir Dini, Cornelia Murko, Saeed Yahyanejad, Ursula H. Augsdörfer: Estimation of situation awareness score and performance using eye and head gaze for human-robot collaboration. 61:1-61:3
- Soha Rostaminia, Addison Mayberry, Deepak Ganesan, Benjamin M. Marlin, Jeremy Gummeson: iLid: eyewear solution for low-power fatigue and drowsiness monitoring. 62:1-62:3
- Soha Rostaminia, Alexander Lamson, Subhransu Maji, Tauhidur Rahman, Deepak Ganesan: W!NCE: eyewear solution for upper face action units monitoring. 63:1-63:3
- Wolfgang Fuhl, Efe Bozkir, Benedikt Hosp, Nora Castner, David Geisler, Thiago C. Santini, Enkelejda Kasneci: Encodji: encoding gaze data into emoji space for an amusing scanpath classification approach ;). 64:1-64:4
- Ayush Kumar, Anjul Kumar Tyagi, Michael Burch, Daniel Weiskopf, Klaus Mueller: Task classification model for visual fixation, exploration, and search. 65:1-65:4
- Alexandre Milisavljevic, Thomas Le Bras, Matei Mancas, Coralie Petermann, Bernard Gosselin, Karine Doré-Mazars: Towards a better description of visual exploration through temporal dynamic of ambient and focal modes. 66:1-66:4
- Sudeep Raj, Chia-Chien Wu, Shreya Raj, Nada Attar: Understanding the relationship between microsaccades and pupil dilation. 67:1-67:4
- Francisco López Luro, Veronica Sundstedt: A comparative study of eye tracking and hand controller for aiming tasks in virtual reality. 68:1-68:9
- Katsumi Minakata, John Paulin Hansen, I. Scott MacKenzie, Per Bækgaard, Vijay Rajanna: Pointing by gaze, head, and foot in a head-mounted display. 69:1-69:9
- Guangtao Zhang, John Paulin Hansen, Katsumi Minakata: Hand- and gaze-control of telepresence robots. 70:1-70:8
- Michael Xuelin Huang, Andreas Bulling: SacCalib: reducing calibration distortion for stationary eye trackers using saccadic eye movements. 71:1-71:10
- Diako Mardanbegi, Thomas D. W. Wilcockson, Pete Sawyer, Hans Gellersen, Trevor J. Crawford: SaccadeMachine: software for analyzing saccade tests (anti-saccade and pro-saccade). 72:1-72:8
- Sheikh Rivu, Yasmeen Abdrabou, Thomas Mayer, Ken Pfeuffer, Florian Alt: GazeButton: enhancing buttons with eye gaze interactions. 73:1-73:7
- Korok Sengupta, Raphael Menges, Chandan Kumar, Steffen Staab: Impact of variable positioning of text prediction in gaze-based text entry. 74:1-74:9
- Päivi Majaranta, Jari Laitinen, Jari Kangas, Poika Isokoski: Inducing gaze gestures by static illustrations. 75:1-75:5
- Diako Mardanbegi, Thies Pfeiffer: EyeMRTK: a toolkit for developing eye gaze interactive applications in virtual and augmented reality. 76:1-76:5
- Pawel Kasprowski, Katarzyna Harezlak: Using mutual distance plot and warped time distance chart to compare scan-paths of multiple observers. 77:1-77:5
- Fabian Deitelhoff, Andreas Harrer, Andrea Kienle: An intuitive visualization for rapid data analysis: using the DNA metaphor for eye movement patterns. 78:1-78:5
- Sarah D'Angelo, Jeff Brewer, Darren Gergle: Iris: a tool for designing contextually relevant gaze visualizations. 79:1-79:5
- Andrew T. Duchowski, Nina A. Gehrer, Michael Schönenberg, Krzysztof Krejtz: Art facing science: Artistic heuristics for face detection: tracking gaze when looking at faces. 80:1-80:5
- Ayush Kumar, Michael Burch, Klaus Mueller: Visually comparing eye movements over space and time. 81:1-81:9
- Ayush Kumar, Neil Timmermans, Michael Burch, Klaus Mueller: Clustered eye movement similarity matrices. 82:1-82:9
- Michael Burch, Ayush Kumar, Klaus Mueller, Titus Kervezee, Wouter W. L. Nuijten, Rens Oostenbach, Lucas Peeters, Gijs Smit: Finding the outliers in scanpath data. 83:1-83:5
- Kenan Bektas, Arzu Çöltekin, Jens Krüger, Andrew T. Duchowski, Sara Irina Fabrikant: GeoGCD: improved visual search via gaze-contingent display. 84:1-84:10
- Oleg Spakov, Howell O. Istance, Kari-Jouko Räihä, Tiia Viitanen, Harri Siirtola: Eye gaze and head gaze in collaborative games. 85:1-85:9
- Rébaï Soret, Pom Charras, Christophe Hurter, Vsevolod Peysakhovich: Attentional orienting in virtual reality using endogenous and exogenous cues in auditory and visual modalities. 86:1-86:8
- Fabian Göbel, Peter Kiefer: POITrack: improving map-based planning with implicit POI tracking. 87:1-87:9
- Haofei Wang, Bertram E. Shi: Gaze awareness improves collaboration efficiency in a collaborative assembly task. 88:1-88:5
- Michael Burch: Interaction graphs: visual analysis of eye movement data from interactive stimuli. 89:1-89:5
- Sandeep Vidyapu, Vijaya Saradhi Vedula, Samit Bhattacharya: Quantitative visual attention prediction on webpage images using multiclass SVM. 90:1-90:9
- Agnieszka Ozimek, Paulina Lewandowska, Krzysztof Krejtz, Andrew T. Duchowski: Attention towards privacy notifications on web pages. 91:1-91:5
- Mónica Cortiñas, Raquel Chocarro, Arantxa Villanueva: Image, brand and price info: do they always matter the same? 92:1-92:8
- Michael Burch, Ayush Kumar, Neil Timmermans: An interactive web-based visual analytics tool for detecting strategic eye movement patterns. 93:1-93:5