Chris Davis 0001
Person information
- affiliation: University of Western Sydney, MARCS Auditory Laboratories, Australia
- affiliation (former): University of Melbourne, Australia
Other persons with the same name
- Chris Davis — disambiguation page
- Chris Davis 0002 (aka: Chris Irwin Davis) — University of Texas at Dallas, Richardson, TX, USA
2010 – 2019
- 2019
- [c63] Chris Davis, Jeesun Kim: Auditory and Visual Emotion Recognition: Investigating why some portrayals are better recognized than others. AVSP 2019: 33-37
- [c62] April Shi Min Ching, Jeesun Kim, Chris Davis: Auditory-Visual Integration During the Attentional Blink. AVSP 2019: 63-68
- [c61] Chris Davis, Jeesun Kim: Perceiving Older Adults Producing Clear and Lombard Speech. INTERSPEECH 2019: 3103-3107
- [e2] Chris Davis: 15th International Conference on Auditory-Visual Speech Processing, AVSP 2019, Melbourne, Australia, August 10-11, 2019. ISCA 2019 [contents]
- 2018
- [j10] Jeesun Kim, Gérard Bailly, Chris Davis: Introduction to the special issue on auditory-visual expressive speech and gesture in humans and machines. Speech Commun. 98: 63-67 (2018)
- [j9] Chee Seng Chong, Jeesun Kim, Chris Davis: Disgust expressive speech: The acoustic consequences of the facial expression of emotion. Speech Commun. 98: 68-72 (2018)
- [c60] Jeesun Kim, Sonya Karisma, Vincent Aubanel, Chris Davis: Investigating the Role of Familiar Face and Voice Cues in Speech Processing in Noise. INTERSPEECH 2018: 2276-2279
- [c59] Chris Davis, Jeesun Kim: Characterizing Rhythm Differences between Strong and Weak Accented L2 Speech. INTERSPEECH 2018: 2568-2572
- 2017
- [j8] Sarah E. Fenwick, Catherine T. Best, Chris Davis, Michael D. Tyler: The influence of auditory-visual speech and clear speech on cross-language perceptual assimilation. Speech Commun. 92: 114-124 (2017)
- [c58] Chris Davis, Jeesun Kim, Outi Tuomainen, Valérie Hazan: The effect of age and hearing loss on partner-directed gaze in a communicative task. AVSP 2017: 12-15
- [c57] Vincent Aubanel, Cassandra Masters, Jeesun Kim, Chris Davis: Contribution of visual rhythmic information to speech perception in noise. AVSP 2017: 95-99
- [c56] Chris Davis, Chee Seng Chong, Jeesun Kim: The Effect of Spectral Profile on the Intelligibility of Emotional Speech in Noise. INTERSPEECH 2017: 581-585
- [e1] Slim Ouni, Chris Davis, Alexandra Jesse, Jonas Beskow: 14th International Conference on Auditory-Visual Speech Processing, AVSP 2017, Stockholm, Sweden, August 25-26, 2017. ISCA 2017 [contents]
- 2016
- [j7] Tim Paris, Jeesun Kim, Chris Davis: The Processing of Attended and Predicted Sounds in Time. J. Cogn. Neurosci. 28(1): 158-165 (2016)
- [c55] Chee Seng Chong, Jeesun Kim, Chris Davis: The Sound of Disgust: How Facial Expression May Influence Speech Production. INTERSPEECH 2016: 37-41
- [c54] Jeesun Kim, Chris Davis: The Consistency and Stability of Acoustic and Visual Cues for Different Prosodic Attitudes. INTERSPEECH 2016: 57-61
- [c53] Sarah E. Fenwick, Catherine T. Best, Chris Davis, Michael D. Tyler: The Influence of Modality and Speaking Style on the Assimilation Type and Categorization Consistency of Non-Native Speech. INTERSPEECH 2016: 1016-1020
- 2015
- [j6] Michael Fitzpatrick, Jeesun Kim, Chris Davis: The effect of seeing the interlocutor on auditory and visual speech production in noise. Speech Commun. 74: 37-51 (2015)
- [c52] Simone Simonetti, Jeesun Kim, Chris Davis: Cross-modality matching of linguistic prosody in older and younger adults. AVSP 2015: 17-21
- [c51] Chee Seng Chong, Jeesun Kim, Chris Davis: Visual vs. auditory emotion information: how language and culture affect our bias towards the different modalities. AVSP 2015: 46-51
- [c50] Hansjörg Mixdorff, Angelika Hönemann, Jeesun Kim, Chris Davis: Anticipation of turn-switching in auditory-visual dialogs. AVSP 2015: 52-56
- [c49] Sarah Fenwick, Chris Davis, Catherine T. Best, Michael D. Tyler: The effect of modality and speaking style on the discrimination of non-native phonological and phonetic contrasts in noise. AVSP 2015: 67-72
- [c48] Chris Davis, Jeesun Kim, Vincent Aubanel, Gregory Zelic, Yatin Mahajan: The stability of mouth movements for multiple talkers over multiple sessions. AVSP 2015: 99-102
- [c47] Vincent Aubanel, Chris Davis, Jeesun Kim: Explaining the visual and masked-visual advantage in speech perception in noise: the role of visual phonetic cues. AVSP 2015: 132-136
- [c46] Vincent Aubanel, Chris Davis, Jeesun Kim: Syllabic structure and informational content in English and Spanish. ICPhS 2015
- [c45] Chris Davis, Jason A. Shaw, Michael I. Proctor, Donald Derrick, Stacey Sherwood, Jeesun Kim: Examining speech production using masked priming. ICPhS 2015
- [c44] Saya Kawase, Jeesun Kim, Vincent Aubanel, Chris Davis: Influences of visual speech information on the perception of foreign-accented speech in noise. ICPhS 2015
- [c43] Jeesun Kim, Vincent Aubanel, Chris Davis: The effect of auditory and visual signal availability on speech perception. ICPhS 2015
- [c42] Simone Simonetti, Jeesun Kim, Chris Davis: Auditory, visual, and auditory-visual spoken emotion recognition in young and old adults. ICPhS 2015
- [c41] Simone Simonetti, Jeesun Kim, Chris Davis: Cross-modality matching of linguistic and emotional prosody. INTERSPEECH 2015: 56-59
- [c40] Chee Seng Chong, Jeesun Kim, Chris Davis: Exploring acoustic differences between Cantonese (tonal) and English (non-tonal) spoken expressions of emotions. INTERSPEECH 2015: 1522-1526
- 2014
- [j5] Jeesun Kim, Chris Davis: Comparing the consistency and distinctiveness of speech produced in quiet and in noise. Comput. Speech Lang. 28(2): 598-606 (2014)
- [j4] Jeesun Kim, Erin Cvejic, Chris Davis: Tracking eyebrows and head gestures associated with spoken prosody. Speech Commun. 57: 317-330 (2014)
- [c39] Yatin Mahajan, Jeesun Kim, Chris Davis: Does elderly speech recognition in noise benefit from spectral and visual cues? INTERSPEECH 2014: 2021-2025
- [c38] Vincent Aubanel, Chris Davis, Jeesun Kim: Interplay of informational content and energetic masking in speech perception in noise. INTERSPEECH 2014: 2046-2049
- [c37] Chee Seng Chong, Jeesun Kim, Chris Davis: The effect of expression clarity and presentation modality on non-native vocal emotion perception. O-COCOSDA 2014: 1-5
- 2013
- [c36] Gregory Zelic, Jeesun Kim, Chris Davis: Spontaneous synchronisation between repetitive speech and rhythmic gesture. AVSP 2013: 17-20
- [c35] Michael Fitzpatrick, Jeesun Kim, Chris Davis: Auditory and auditory-visual Lombard speech perception by younger and older adults. AVSP 2013: 105-110
- [c34] Chris Davis, Jeesun Kim: Detecting auditory-visual speech synchrony: how precise? AVSP 2013: 117-122
- [c33] Jeesun Kim, Chris Davis: How far out? The effect of peripheral visual speech on speech perception. AVSP 2013: 123-128
- [c32] Jeesun Kim, Ruben Demirdjian, Chris Davis: Spontaneous and explicit speech imitation. INTERSPEECH 2013: 544-547
- [c31] Chris Davis, Jeesun Kim: The effect of visual speech timing and form cues on the processing of speech and nonspeech. INTERSPEECH 2013: 1639-1642
- 2012
- [c30] Jeesun Kim, Chris Davis, Christine Kitamura: Auditory-visual speech to infants and adults: signals and correlations. INTERSPEECH 2012: 1119-1122
- [c29] Michael Fitzpatrick, Jeesun Kim, Chris Davis: The Intelligibility of Lombard Speech: Communicative setting matters. INTERSPEECH 2012: 1720-1723
- 2011
- [c28] Tim Paris, Jeesun Kim, Chris Davis: Visual speech influences speeded auditory identification. AVSP 2011: 5-8
- [c27] Erin Cvejic, Jeesun Kim, Chris Davis: Perceiving visual prosody from point-light displays. AVSP 2011: 15-20
- [c26] Michael Fitzpatrick, Jeesun Kim, Chris Davis: The effect of seeing the interlocutor on auditory and visual speech production in noise. AVSP 2011: 31-35
- [c25] Jeesun Kim, Chris Davis: Audiovisual speech processing in visual speech noise. AVSP 2011: 73-76
- [c24] Jeesun Kim, Chris Davis: Testing Audio-Visual Familiarity Effects on Speech Perception in Noise. ICPhS 2011: 1062-1065
- [c23] Erin Cvejic, Jeesun Kim, Chris Davis: Temporal Relationship Between Auditory and Visual Prosodic Cues. INTERSPEECH 2011: 981-984
- [c22] Jeesun Kim, Chris Davis: Auditory Speech Processing is Affected by Visual Speech in the Periphery. INTERSPEECH 2011: 2465-2468
- [c21] Tim Paris, Jeesun Kim, Chris Davis: Visual Speech Speeds Up Auditory Identification Responses. INTERSPEECH 2011: 2469-2472
- [c20] Michael Fitzpatrick, Jeesun Kim, Chris Davis: The Effect of Seeing the Interlocutor on Speech Production in Different Noise Types. INTERSPEECH 2011: 2829-2832
- 2010
- [j3] Erin Cvejic, Jeesun Kim, Chris Davis: Prosody off the top of the head: Prosodic contrasts can be discriminated by head motion. Speech Commun. 52(6): 555-564 (2010)
- [c19] Erin Cvejic, Jeesun Kim, Chris Davis: Abstracting visual prosody across speakers and face areas. AVSP 2010: 3-1
- [c18] Jeesun Kim, Chris Davis: Emotion perception by eye and ear and halves and wholes. AVSP 2010: 3-2
- [c17] Erin Cvejic, Jeesun Kim, Chris Davis, Guillaume Gibert: Prosody for the eyes: quantifying visual prosody using guided principal component analysis. INTERSPEECH 2010: 1433-1436
2000 – 2009
- 2009
- [c16] Chris Davis, Jeesun Kim: Recognizing spoken vowels in multi-talker babble: spectral and visual speech cues. AVSP 2009: 130-133
- [c15] Anne Cutler, Chris Davis, Jeesun Kim: Non-automaticity of use of orthographic knowledge in phoneme evaluation. INTERSPEECH 2009: 380-383
- [c14] Jeesun Kim, Chris Davis, Christian Kroos, Harold Hill: Speaker discriminability for visual speech modes. INTERSPEECH 2009: 2259-2262
- 2008
- [c13] Jeesun Kim, Christian Kroos, Chris Davis: Hearing a talking face: an auditory influence on a visual detection task. AVSP 2008: 107-110
- [c12] Denis Burnham, Arman Abrahamyan, Lawrence Cavedon, Chris Davis, Andrew Hodgins, Jeesun Kim, Christian Kroos, Takaaki Kuratate, Trent W. Lewis, Martin H. Luerssen, Garth Paine, David M. W. Powers, Marcia Riley, Stelarc, Kate Stevens: From talking to thinking heads: report 2008. AVSP 2008: 127-130
- [c11] Chris Davis, Jeesun Kim, Angelo Barbaro: Masked speech priming: no priming in dense neighbourhoods. INTERSPEECH 2008: 2040-2043
- [c10] Erin Cvejic, Jeesun Kim, Chris Davis: Visual speech modifies the phoneme restoration effect. INTERSPEECH 2008: 2057
- 2007
- [c9] Chris Davis, Jeesun Kim, Takaaki Kuratate, Johnson Chen, Stelarc, Denis Burnham: Making a thinking-talking head. AVSP 2007: 8
- [c8] Jeesun Kim, Chris Davis: Restoration effects in auditory and visual speech. AVSP 2007
- 2005
- [c7] Jeesun Kim, Chris Davis, Guillaume Vignali, Harold Hill: A visual concomitant of the Lombard reflex. AVSP 2005: 17-22
- 2004
- [j2] Jeesun Kim, Chris Davis: Investigating the audio-visual speech detection advantage. Speech Commun. 44(1-4): 19-30 (2004)
- [c6] Jinyoung Kim, Jeesun Kim, Chris Davis: Audio-visual spoken language processing. INTERSPEECH 2004: 1133-1136
- [c5] Chris Davis, Jeesun Kim: Of the top of the head: audio-visual speech perception from the nose up. INTERSPEECH 2004: 1153-1156
- 2003
- [c4] Jeesun Kim, Chris Davis: Testing the cuing hypothesis for the AV speech detection advantage. AVSP 2003: 9-12
- 2001
- [j1] Chris Davis, Jeesun Kim: Repeating and Remembering Foreign Language Words: Implications for Language Teaching Systems. Artif. Intell. Rev. 16(1): 37-47 (2001)
- [c3] Jeesun Kim, Chris Davis: Visible speech cues and auditory detection of spoken sentences: an effect of degree of correlation between acoustic and visual properties. AVSP 2001: 127-131
1990 – 1999
- 1999
- [c2] Chris Davis, Jeesun Kim: Perception of clearly presented foreign language sounds: The effects of visible speech. AVSP 1999: 12
- 1998
- [c1] Chris Davis, Jeesun Kim: Repeating and Remembering Foreign Language Words: Does Seeing Help? AVSP 1998: 121-126