Joshua Newn
2020 – today
- 2024
- [j11] Baosheng James Hou, Joshua Newn, Ludwig Sidenmark, Anam Ahmad Khan, Hans Gellersen: GazeSwitch: Automatic Eye-Head Mode Switching for Optimised Hands-Free Pointing. Proc. ACM Hum. Comput. Interact. 8(ETRA): 1-20 (2024)
- [j10] Francesco Chiossi, Uwe Gruenefeld, Baosheng James Hou, Joshua Newn, Changkun Ou, Rulu Liao, Robin Welsch, Sven Mayer: Understanding the Impact of the Reality-Virtuality Continuum on Visual Search Using Fixation-Related Potentials and Eye Tracking Features. Proc. ACM Hum. Comput. Interact. 8(MHCI): 1-33 (2024)
- 2023
- [j9] Ludwig Sidenmark, Franziska Prummer, Joshua Newn, Hans Gellersen: Comparing Gaze, Head and Controller Selection of Dynamically Revealed Targets in Head-Mounted Displays. IEEE Trans. Vis. Comput. Graph. 29(11): 4740-4750 (2023)
- [c29] Riccardo Bovo, Daniele Giunchi, Ludwig Sidenmark, Joshua Newn, Hans Gellersen, Enrico Costanza, Thomas Heinis: Speech-Augmented Cone-of-Vision for Exploratory Data Analysis. CHI 2023: 162:1-162:18
- [c28] Baosheng James Hou, Joshua Newn, Ludwig Sidenmark, Anam Ahmad Khan, Per Bækgaard, Hans Gellersen: Classifying Head Movements to Separate Head-Gaze and Head Gestures as Distinct Modes of Input. CHI 2023: 253:1-253:14
- [c27] Ludwig Sidenmark, Christopher Clarke, Joshua Newn, Mathias N. Lystbæk, Ken Pfeuffer, Hans Gellersen: Vergence Matching: Inferring Attention to Objects in 3D Environments for Gaze-Assisted Selection. CHI 2023: 257:1-257:15
- [c26] Joshua Newn, Madison Klarkowski: Biofeedback-Driven Multiplayer Games: Leveraging Social Awareness and Physiological Signals for Play. CHI PLAY (Companion) 2023: 212-215
- [c25] Joshua Newn, Sophia Quesada, Baosheng James Hou, Anam Ahmad Khan, Florian Weidner, Hans Gellersen: Exploring Eye Expressions for Enhancing EOG-Based Interaction. INTERACT (4) 2023: 68-79
- 2022
- [j8] Joshua Newn, Ryan M. Kelly, Simon D'Alfonso, Reeva Lederman: Examining and Promoting Explainable Recommendations for Personal Sensing Technology Acceptance. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 6(3): 133:1-133:27 (2022)
- [c24] Anam Ahmad Khan, Sadia Nawaz, Joshua Newn, Ryan M. Kelly, Jason M. Lodge, James Bailey, Eduardo Velloso: To type or to speak? The effect of input modality on text understanding during note-taking. CHI 2022: 164:1-164:15
- [c23] Anam Ahmad Khan, Joshua Newn, James Bailey, Eduardo Velloso: Integrating Gaze and Speech for Enabling Implicit Interactions. CHI 2022: 349:1-349:14
- 2021
- [j7] Anam Ahmad Khan, Joshua Newn, Ryan M. Kelly, Namrata Srivastava, James Bailey, Eduardo Velloso: GAVIN: Gaze-Assisted Voice-Based Implicit Note-taking. ACM Trans. Comput. Hum. Interact. 28(4): 26:1-26:32 (2021)
- [c22] Abdallah El Ali, Monica Perusquía-Hernández, Mariam Hassib, Yomna Abdelrahman, Joshua Newn: MEEC: Second Workshop on Momentary Emotion Elicitation and Capture. CHI Extended Abstracts 2021: 114:1-114:6
- [c21] Melissa J. Rogerson, Joshua Newn, Ronal Singh, Emma Baillie, Michael Papasimeon, Lyndon Benke, Tim Miller: Observing Multiplayer Boardgame Play at a Distance. CHI PLAY 2021: 262-267
- [c20] Vincent Crocher, Ronal Singh, Joshua Newn, Denny Oetomo: Towards a Gaze-Informed Movement Intention Model for Robot-Assisted Upper-Limb Rehabilitation. EMBC 2021: 6155-6158
- [c19] Namrata Srivastava, Sadia Nawaz, Joshua Newn, Jason M. Lodge, Eduardo Velloso, Sarah M. Erfani, Dragan Gasevic, James Bailey: Are you with me? Measurement of Learners' Video-Watching Attention with Eye Tracking. LAK 2021: 88-98
- [i3] Anam Ahmad Khan, Joshua Newn, Ryan Kelly, Namrata Srivastava, James Bailey, Eduardo Velloso: GAVIN: Gaze-Assisted Voice-Based Implicit Note-taking. CoRR abs/2104.00870 (2021)
- 2020
- [b1] Joshua Newn: Gaze-Based Intention Recognition for Human-Agent Collaboration: Towards Nonverbal Communication in Human-AI Interaction. University of Melbourne, Parkville, Victoria, Australia, 2020
- [j6] Ronal Rajneshwar Singh, Tim Miller, Joshua Newn, Eduardo Velloso, Frank Vetere, Liz Sonenberg: Combining gaze and AI planning for online human intention recognition. Artif. Intell. 284: 103275 (2020)
- [j5] Difeng Yu, Qiushi Zhou, Joshua Newn, Tilman Dingler, Eduardo Velloso, Jorge Gonçalves: Fully-Occluded Target Selection in Virtual Reality. IEEE Trans. Vis. Comput. Graph. 26(12): 3402-3413 (2020)
- [j4] Qiushi Zhou, Difeng Yu, Martin N. Reinoso, Joshua Newn, Jorge Gonçalves, Eduardo Velloso: Eyes-free Target Acquisition During Walking in Immersive Mixed Reality. IEEE Trans. Vis. Comput. Graph. 26(12): 3423-3433 (2020)
- [c18] Ebrahim Babaei, Namrata Srivastava, Joshua Newn, Qiushi Zhou, Tilman Dingler, Eduardo Velloso: Faces of Focus: A Study on the Facial Cues of Attentional States. CHI 2020: 1-13
- [i2] Anam Ahmad Khan, Sadia Nawaz, Joshua Newn, Jason M. Lodge, James Bailey, Eduardo Velloso: Using voice note-taking to promote learners' conceptual understanding. CoRR abs/2012.02927 (2020)
2010 – 2019
- 2019
- [j3] Yomna Abdelrahman, Anam Ahmad Khan, Joshua Newn, Eduardo Velloso, Sherine Ashraf Safwat, James Bailey, Andreas Bulling, Frank Vetere, Albrecht Schmidt: Classifying Attention Types with Thermal Imaging and Eye Tracking. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 3(3): 69:1-69:27 (2019)
- [c17] Niels Wouters, Ryan M. Kelly, Eduardo Velloso, Katrin Wolf, Hasan Shahid Ferdous, Joshua Newn, Zaher Joukhadar, Frank Vetere: Biometric Mirror: Exploring Ethical Opinions towards Facial Analysis and Automated Decision-Making. Conference on Designing Interactive Systems 2019: 447-461
- [c16] Fraser Allison, Joshua Newn, Wally Smith, Marcus Carter, Martin R. Gibbs: Frame Analysis of Voice Interaction Gameplay. CHI 2019: 393
- [c15] Qiushi Zhou, Joshua Newn, Namrata Srivastava, Tilman Dingler, Jorge Gonçalves, Eduardo Velloso: Cognitive Aid: Task Assistance Based On Mental Workload Estimation. CHI Extended Abstracts 2019
- [c14] Joshua Newn, Ronal Rajneshwar Singh, Eduardo Velloso, Frank Vetere: Combining implicit gaze and AI for real-time intention projection. UbiComp/ISWC Adjunct 2019: 324-327
- [c13] Joshua Newn, Benjamin Tag, Ronal Rajneshwar Singh, Eduardo Velloso, Frank Vetere: AI-mediated gaze-based intention recognition for smart eyewear: opportunities & challenges. UbiComp/ISWC Adjunct 2019: 637-642
- [c12] Qiushi Zhou, Joshua Newn, Benjamin Tag, Hao-Ping Lee, Chaofan Wang, Eduardo Velloso: Ubiquitous smart eyewear interactions using implicit sensing and unobtrusive information output. UbiComp/ISWC Adjunct 2019: 661-666
- [c11] Joshua Newn, Ronal Rajneshwar Singh, Fraser Allison, Prashan Madumal, Eduardo Velloso, Frank Vetere: Designing Interactions with Intention-Aware Gaze-Enabled Artificial Agents. INTERACT (2) 2019: 255-281
- 2018
- [j2] Namrata Srivastava, Joshua Newn, Eduardo Velloso: Combining Low and Mid-Level Gaze Features for Desktop Activity Recognition. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2(4): 189:1-189:27 (2018)
- [c10] Ronal Rajneshwar Singh, Tim Miller, Joshua Newn, Liz Sonenberg, Eduardo Velloso, Frank Vetere: Combining Planning with Gaze for Online Human Intention Recognition. AAMAS 2018: 488-496
- [c9] Joshua Newn: Enabling Intent Recognition Through Gaze Awareness in User Interfaces. CHI Extended Abstracts 2018
- [c8] Joshua Newn, Fraser Allison, Eduardo Velloso, Frank Vetere: Looks Can Be Deceiving: Using Gaze Visualisation to Predict and Mislead Opponents in Strategic Gameplay. CHI 2018: 261
- [c7] Michael Lankes, Joshua Newn, Bernhard Maurer, Eduardo Velloso, Martin Dechant, Hans Gellersen: EyePlay Revisited: Past, Present and Future Challenges for Eye-Based Interaction in Games. CHI PLAY (Companion) 2018: 689-693
- [c6] Oludamilare Matthews, Zhanna Sarsenbayeva, Weiwei Jiang, Joshua Newn, Eduardo Velloso, Sarah Clinch, Jorge Gonçalves: Inferring the Mood of a Community From Their Walking Speed: A Preliminary Study. UbiComp/ISWC Adjunct 2018: 1144-1149
- [c5] Prashan Madumal, Ronal Rajneshwar Singh, Joshua Newn, Frank Vetere: Interaction design for explainable AI: workshop proposal. OZCHI 2018: 607-608
- [i1] Prashan Madumal, Ronal Rajneshwar Singh, Joshua Newn, Frank Vetere: Interaction Design for Explainable AI: Workshop Proceedings. CoRR abs/1812.08597 (2018)
- 2017
- [j1] Eduardo Velloso, Marcus Carter, Joshua Newn, Augusto Esteves, Christopher Clarke, Hans Gellersen: Motion Correlation: Selecting Objects by Matching Their Movement. ACM Trans. Comput. Hum. Interact. 24(3): 22:1-22:35 (2017)
- [c4] Joshua Newn, Eduardo Velloso, Fraser Allison, Yomna Abdelrahman, Frank Vetere: Evaluating Real-Time Gaze Representations to Infer Intentions in Competitive Turn-Based Strategy Games. CHI PLAY 2017: 541-552
- 2016
- [c3] Joshua Newn, Eduardo Velloso, Marcus Carter, Frank Vetere: Exploring the Effects of Gaze Awareness on Multiplayer Gameplay. CHI PLAY (Companion) 2016: 239-244
- [c2] Joshua Newn, Eduardo Velloso, Marcus Carter, Frank Vetere: Multimodal Segmentation on a Large Interactive Tabletop: Extending Interaction on Horizontal Surfaces with Gaze. ISS 2016: 251-260
- 2015
- [c1] Marcus Carter, Joshua Newn, Eduardo Velloso, Frank Vetere: Remote Gaze and Gesture Tracking on the Microsoft Kinect: Investigating the Role of Feedback. OZCHI 2015: 167-176
last updated on 2024-12-02 21:31 CET by the dblp team
all metadata released as open data under CC0 1.0 license