AVSP 1997: Rhodes, Greece
- Christian Benoît, Ruth Campbell: ESCA Workshop on Audio-Visual Speech Processing, AVSP '97, Rhodes, Greece, September 26-27, 1997. ISCA 1997
- Ruth Campbell, Philip J. Benson, Simon B. Wallace: The perception of mouthshape: photographic images of natural speech sounds can be perceived categorically. 1-4
- Emanuela Magno Caldognetto, Claudio Zmarich, Piero Cosi, Franco Ferrero: Italian consonantal visemes: relationships between spatial/temporal articulatory characteristics and coproduced acoustic signal. 5-8
- Shizuo Hiki, Yumiko Fukuda: Negative effect of homophones on speechreading in Japanese. 9-12
- Jacqueline Leybaert, Daniela Marchetti: Visual rhyming effects in deaf children. 13-16
- Isabella Poggi, Catherine Pelachaud: Context sensitive faces. 17-20
- Edward T. Auer Jr., Lynne E. Bernstein, R. S. Waldstein, P. E. Tucker: Effects of phonetic variation and the structure of the lexicon on the uniqueness of words. 21-24
- Loredana Cerrato, Federico Albano Leoni, Andrea Paoloni: A methodology to quantify the contribution of visual and prosodic information to the process of speech comprehension. 25-28
- Jean-Pierre Gagné, Lina Boutin: The effects of speaking rate on visual speech intelligibility. 29-32
- Emanuela Magno Caldognetto, Isabella Poggi: Micro- and macro-bimodality. 33-36
- Laurent Girin, Jean-Luc Schwartz, Gang Feng: Can the visual input make the audio signal "pop out" in noise? A first study of the enhancement of noisy VCV acoustic sequences by audio-visual fusion. 37-40
- Hani Yehia, Philip Rubin, Eric Vatikiotis-Bateson: Quantitative association of orofacial and vocal-tract shapes. 41-44
- Björn Lyxell, Ulf Andersson, Stig Arlinger, Henrik Harder, Jerker Rönnberg: Phonological representation and speech understanding with cochlear implants in deafened adults. 45-48
- Régine André-Obrecht, Bruno Jacob, Nathalie Parlangeau: Audio visual speech recognition and segmental master slave HMM. 49-52
- Stephen J. Cox, Iain A. Matthews, J. Andrew Bangham: Combining noise compensation with visual information in speech recognition. 53-56
- Gabi Krone, B. Talk, Andreas Wichert, Günther Palm: Neural architectures for sensor fusion in speech recognition. 57-60
- Alexandrina Rogozan, Paul Deléglise, Mamoun Alissali: Adaptive determination of audio and visual weights for automatic speech recognition. 61-64
- Gerasimos Potamianos, Eric Cosatto, Hans Peter Graf, David B. Roe: Speaker independent audio-visual database for bimodal ASR. 65-68
- Pierre Jourlin: Word-dependent acoustic-labial weights in HMM-based speech recognition. 69-72
- Robert E. Remez, Jennifer M. Fellowes, David B. Pisoni, Winston D. Goh, Philip Rubin: Audio-visual speech perception without traditional speech cues: a second report. 73-76
- Béatrice de Gelder, Nancy Etcoff, Jean Vroomen: Impairment of visual speech integration in prosopagnosia. 77-80
- C. Schwippert, Christian Benoît: Audiovisual intelligibility of an androgynous speaker. 81-84
- Ruth Campbell, A. Whittingham, U. Frith, Dominic W. Massaro, Michael M. Cohen: Audiovisual speech perception in dyslexics: impaired unimodal perception but no audiovisual integration deficit. 85-88
- Lynne E. Bernstein, Paul Iverson, Edward T. Auer Jr.: Elucidating the complex relationships between phonetic perception and word recognition in audiovisual speech perception. 89-92
- Denis Burnham, Sheila Keane: The Japanese McGurk effect: the role of linguistic and cultural factors in auditory-visual speech perception. 93-96
- Paul Bertelson, Jean Vroomen, Béatrice de Gelder: Auditory-visual interaction in voice localization and in bimodal speech recognition: the effects of desynchronization. 97-100
- Mikko Sams, Veikko Surakka, Pia Helin, Riitta Kättö: Audiovisual fusion in Finnish syllables and words. 101-104
- Akira Ichikawa, Yoichiro Okada, Atsushi Imiya, K. Horiuchi: Analytical method for linguistic information of facial gestures in natural dialogue languages. 105-108
- Bogdan Raducanu, Manuel Graña: An approach to face localization based on signature analysis. 109-112
- Uwe Meier, Rainer Stiefelhagen, Jie Yang: Preprocessing of visual speech under real world conditions. 113-116
- Lionel Revéret, Frederique Garcia, Christian Benoît, Eric Vatikiotis-Bateson: A hybrid approach to orientation-free liptracking. 117-120
- Sumit Basu, Alex Pentland: Recovering 3D lip structure from 2D observations using a model trained from video. 121-124
- Michael Vogt: Interpreted multi-state lip models for audio-visual speech recognition. 125-128
- Anne H. Anderson, Art Blokland: Intelligibility of speech mediated by low frame-rate video. 129-132
- David F. McAllister, Robert D. Rodman, Donald L. Bitzer, Andrew S. Freeman: Lip synchronization of speech. 133-136
- Eli Yamamoto, Satoshi Nakamura, Kiyohiro Shikano: Speech to lip movement synthesis by HMM. 137-140
- Tony Ezzat, Tomaso A. Poggio: Videorealistic talking faces: a morphing approach. 141-144
- Bertrand Le Goff, Christian Benoît: A French-speaking synthetic head. 145-148
- Jonas Beskow: Animation of talking agents. 149-152
- Christoph Bregler, Michele Covell, Malcolm Slaney: Video rewrite: visual speech synthesis from video. 153-156