


ICSLP 1994: Yokohama, Japan
- The 3rd International Conference on Spoken Language Processing, ICSLP 1994, Yokohama, Japan, September 18-22, 1994. ISCA 1994

Plenary Lectures
- Ilse Lehiste: Poetic metre, prominence, and the perception of prosody: a case of intersection of art and science of spoken language. 2237-2244
- Shizuo Hiki: Possibilities of compensating for defects in speech perception and production. 2245-2252
- Willem J. M. Levelt: On the skill of speaking: how do we access words? 2253-2258
Integration of Speech and Natural Language Processing
- Toshiyuki Takezawa, Tsuyoshi Morimoto: An efficient predictive LR parser using pause information for continuously spoken sentence recognition. 1-4
- Kyunghee Kim, Geunbae Lee, Jong-Hyeok Lee, Hong Jeong: Integrating TDNN-based diphone recognition with table-driven morphology parsing for understanding of spoken Korean. 5-8
- Frank O. Wallerstein, Akio Amano, Nobuo Hataoka: Implementation issues and parsing speed evaluation of HMM-LR parser. 9-12
- Kenji Kita, Yoneo Yano, Tsuyoshi Morimoto: One-pass continuous speech recognition directed by generalized LR parsing. 13-16
- Bernd Plannerer, Tobias Einsele, Martin Beham, Günther Ruske: A continuous speech recognition system integrating additional acoustic knowledge sources in a data-driven beam search algorithm. 17-20
- Michael K. Brown, Bruce Buntschuh: A context-free grammar compiler for speech understanding systems. 21-24
- Katashi Nagao, Kôiti Hasida, Takashi Miyata: Probabilistic constraint for integrated speech and language processing. 25-28
- William H. Edmondson, Jon P. Iles: A non-linear architecture for speech and natural language processing. 29-32
Articulatory Motion
- Donna Erickson, Kevin A. Lenzo, Masashi Sawada: Manifestations of contrastive emphasis in jaw movement in dialogue. 33-36
- Sook-Hyang Lee, Mary E. Beckman, Michel Jackson: Jaw targets for strident fricatives. 37-40
- David J. Ostry, Eric Vatikiotis-Bateson: Jaw motions in speech are controlled in (at least) three degrees of freedom. 41-44
- Mark K. Tiede, Eric Vatikiotis-Bateson: Extracting articulator movement parameters from a videodisc-based cineradiographic database. 45-48
- Maureen L. Stone, Andrew J. Lundberg: Tongue-palate interactions in consonants vs. vowels. 49-52
- Philip Hoole, Christine Mooshammer, Hans G. Tillmann: Kinematic analysis of vowel production in German. 53-56
- Sarah Hawkins, Andrew Slater: Spread of CV and V-to-V coarticulation in British English: implications for the intelligibility of synthetic speech. 57-60
- Mariko Kondo: Mechanisms of vowel devoicing in Japanese. 61-64
Cognitive Models for Spoken Language Processing
- Peter W. Jusczyk: The development of word recognition. 65-70
- Dennis Norris, James M. McQueen, Anne Cutler: Competition and segmentation in spoken word recognition. 71-74
Semantic Interpretation of Spoken Messages
- Roland Kuhn, Renato de Mori: Recent results in automatic learning rules for semantic interpretation. 75-78
- Allen L. Gorin: Semantic associations, acoustic metrics and adaptive language acquisition. 79-82
- Wayne H. Ward: Extracting information in spontaneous speech. 83-86
- Megumi Kameyama, Isao Arima: Coping with aboutness complexity in information extraction from spoken dialogues. 87-90
- Otoya Shirotsuka, Ken'ya Murakami: An example-based approach to semantic information extraction from Japanese spontaneous speech. 91-94
- Akito Nagai, Yasushi Ishikawa, Kunio Nakajima: A semantic interpretation based on detecting concepts for spontaneous speech understanding. 95-98
- Akira Shimazu, Kiyoshi Kogure, Mikio Nakano: Cooperative distributed processing for understanding dialogue utterances. 99-102
- Michio Okada, Satoshi Kurihara, Ryohei Nakatsu: Incremental elaboration in generating and interpreting spontaneous speech. 103-106
- Wieland Eckert, Heinrich Niemann: Semantic analysis in a robust spoken dialog system. 107-110
- Hiroshi Kanazawa, Shigenobu Seto, Hideki Hashimoto, Hideaki Shinchi, Yoichi Takebayashi: A user-initiated dialogue model and its implementation for spontaneous human-computer interaction. 111-114
Prosody
- Andreas Kießling, Ralf Kompe, Anton Batliner, Heinrich Niemann, Elmar Nöth: Automatic labeling of phrase accents in German. 115-118
- Kikuo Maekawa: Intonational structure of Kumamoto Japanese: a perceptual validation. 119-122
- John F. Pitrelli, Mary E. Beckman, Julia Hirschberg: Evaluation of prosodic transcription labeling reliability in the ToBI framework. 123-126
- Neil P. McAngus Todd, Guy J. Brown: A computational model of prosody perception. 127-130
- Kuniko Kakita: Inter-speaker interaction in speech rhythm: some durational properties of sentences and intersentence intervals. 131-134
- Bertil Lyberg, Barbro Ekholm: The final lengthening phenomenon in Swedish - a consequence of default sentence accent? 135-138
- Dawn M. Behne, Bente Moxness: Concurrent effects of focal stress, postvocalic voicing and distinctive vowel length on syllable-internal timing in Norwegian. 139-142
- Kazuyuki Takagi, Shuichi Itahashi: Prosodic pattern of utterance units in Japanese spoken dialogs. 143-146
- Akira Ichikawa, Shinji Sato: Some prosodical characteristics in spontaneous spoken dialogue. 147-150
Towards Natural Sounding Synthetic Speech
- Inger Karlsson, Johan Liljencrants: Wrestling the two-mass model to conform with real glottal wave forms. 151-154
- Helmer Strik, Lou Boves: Automatic estimation of voice source parameters. 155-158
- Wen Ding, Hideki Kasuya, Shuichi Adachi: Simultaneous estimation of vocal tract and voice source parameters with application to speech synthesis. 159-162
- Pierre Badin, Christine H. Shadle, Y. Pham Thi Ngoc, John N. Carter, W. S. C. Chiu, Celia Scully, K. Stromberg: Frication and aspiration noise sources: contribution of experimental data to articulatory synthesis. 163-166
- Nobuhiro Miki, Pierre Badin, Y. Pham Thi Ngoc, Yoshihiko Ogawa: Vocal tract model and 3-dimensional effect of articulation. 167-170
- Hisayoshi Suzuki, Jianwu Dang, Takayoshi Nakai, Akira Ishida, Hiroshi Sakakibara: 3-D FEM analysis of sound propagation in the nasal and paranasal cavities. 171-174
- Kiyoshi Honda, Hiroyuki Hirai, Jianwu Dang: A physiological model of speech production and the implication of tongue-larynx interaction. 175-178
- Masaaki Honda, Tokihiko Kaburagi: A dynamical articulatory model using potential task representation. 179-182
- Kenneth N. Stevens, Corine A. Bickley, David R. Williams: Control of a Klatt synthesizer by articulatory parameters. 183-186
Statistical Methods for Speech Recognition
- Nobuaki Minematsu, Keikichi Hirose: Speech recognition using HMM with decreased intra-group variation in the temporal structure. 187-190
- Yukihiro Osaka, Shozo Makino, Toshio Sone: Spoken word recognition using phoneme duration information estimated from speaking rate of input speech. 191-194
- Yumi Wakita, Eiichi Tsuboka: State duration constraint using syllable duration for speech recognition. 195-198
- Satoru Hayamizu, Kazuyo Tanaka: Statistical modeling and recognition of rhythm in speech. 199-202
- Xinhui Hu, Keikichi Hirose: Recognition of Chinese tones in monosyllabic and disyllabic speech using HMM. 203-206
- Jun Wu, Zuoying Wang, Jiasong Sun, Jin Guo: Chinese speech understanding and spelling-word translation based on the statistics of corpus. 207-210
- Ren-Hua Wang, Hui Jiang: State-codebook based quasi continuous density hidden Markov model with applications to recognition of Chinese syllables. 211-214
- Eluned S. Parris, Michael J. Carey: Estimating linear discriminant parameters for continuous density hidden Markov models. 215-218
- Franz Wolfertstetter, Günther Ruske: Discriminative state-weighting in hidden Markov models. 219-222
- Takao Watanabe, Koichi Shinoda, Keizaburo Takagi, Eiko Yamada: Speech recognition using tree-structured probability density function. 223-226
- David B. Roe, Michael D. Riley: Prediction of word confusabilities for speech recognition. 227-230
- Li Zhao, Hideyuki Suzuki, Seiichi Nakagawa: A comparison study of output probability functions in HMMs through spoken digit recognition. 231-234
- Tomio Takara, Naoto Matayoshi, Kazuya Higa: Connected spoken word recognition using a many-state Markov model. 235-238
- Finn Tore Johansen: Global optimisation of HMM input transformations. 239-242
- Don X. Sun, Li Deng: Nonstationary-state hidden Markov model with state-dependent time warping: application to speech recognition. 243-246
- Jean-François Mari, Jean Paul Haton: Automatic word recognition based on second-order hidden Markov models. 247-250
- Xixian Chen, Yinong Li, Xiaoming Ma, Lie Zhang: On the application of multiple transition branch hidden Markov models to Chinese digit recognition. 251-254
- Mark J. F. Gales, Steve J. Young: Parallel model combination on a noise corrupted resource management task. 255-258
- Jean-Baptiste Puel, Régine André-Obrecht: Robust signal preprocessing for HMM speech recognition in adverse conditions. 259-262
- Masaharu Katoh, Masaki Kohda: A study on Viterbi best-first search for isolated word recognition using duration-controlled HMM. 263-266
- Satoshi Takahashi, Yasuhiro Minami, Kiyohiro Shikano: An HMM duration control algorithm with a low computational cost. 267-270
- Peter Beyerlein: Fast log-likelihood computation for mixture densities in a high-dimensional feature space. 271-274
- Nick Cremelie, Jean-Pierre Martens: Time synchronous heuristic search in a stochastic segment based recognizer. 275-278
- Maria-Barbara Wesenick, Florian Schiel: Applying speech verification to a large data base of German to obtain a statistical survey about rules of pronunciation. 279-282
- Denis Jouvet, Katarina Bartkova, A. Stouff: Structure of allophonic models and reliable estimation of the contextual parameters. 283-286
- Christoph Windheuser, Frédéric Bimbot, Patrick Haffner: A probabilistic framework for word recognition using phonetic features. 287-290
- Mohamed Afify, Yifan Gong, Jean Paul Haton: Nonlinear time alignment in stochastic trajectory models for speech recognition. 291-294
- David M. Lubensky, Ayman Asadi, Jayant M. Naik: Connected digit recognition using connectionist probability estimators and mixture-Gaussian densities. 295-298
- Kazuya Takeda, Tetsunori Murakami, Shingo Kuroiwa, Seiichi Yamamoto: A trellis-based implementation of minimum error rate training. 299-302
- Me Yi: Concatenated training of subword HMMs using detected labels. 303-306
- Chih-Heng Lin, Pao-Chung Chang, Chien-Hsing Wu: An initial study on speaker adaptation for Mandarin syllable recognition with minimum error discriminative training. 307-310
Phonetics & Phonology I, II
- Yuko Kondo: Phonetic underspecification in schwa. 311-314
- Shin'ichi Tanaka, Haruo Kubozono: Some remarks on the compound accent rule in Japanese. 315-318
- Rodmonga K. Potapova: Modifications of acoustic features in Russian connected speech. 319-322
- Sun-Ah Jun, Mira Oh: A prosodic analysis of three sentence types with "WH" words in Korean. 323-326
- Kazue Hata, Heather Moran, Steve Pearson: Distinguishing the voiceless fricatives f and TH in English: a study of relevant acoustic properties. 327-330
- Kenzo Itoh: Correlation analysis between speech power and pitch frequency for twenty spoken languages. 331-334
- Jongho Jun: On gestural reduction and gestural overlap in Korean and English /PK/ clusters. 335-338
- Carlos Gussenhoven, Toni C. M. Rietveld: Intonation contours and the prominence of F0 peaks. 339-342
- Agnès Belotel-Grenié, Michel Grenié: Phonation types analysis in standard Chinese. 343-346
- Mitsuru Nakai, Hiroshi Shimodaira: Accent phrase segmentation by finding N-best sequences of pitch pattern templates. 347-350
- Bruce L. Derwing, Terrance M. Nearey: Sound similarity judgments and segment prominence: a cross-linguistic study. 351-354
- Hiroya Fujisaki, Sumio Ohno, Kei-ichi Nakamura, Miguelina Guirao, Jorge A. Gurlekian: Analysis of accent and intonation in Spanish based on a quantitative model. 355-358
- Edda Farnetani, Maria Grazia Busà: Italian clusters in continuous speech. 359-362
- Cynthia Grover, Jacques M. B. Terken: Rhythmic constraints in durational control. 363-366
- Kazutaka Kurisu: Further evidence for bi-moraic foot in Japanese. 367-370
- Yuji Sagawa, Masahiro Ito, Noboru Ohnishi, Noboru Sugie: A model for generating self-repairs. 371-374
- Christopher Cleirigh, Julie Vonwiller: Accent identification with a view to assisting recognition (work in progress). 375-378
- K. Nagamma Reddy: Phonetic, phonological, morpho-syntactic and semantic functions of segmental duration in spoken Telugu: acoustic evidence. 379-382
- Zita McRobbie-Utasi: Timing strategies within the paragraph. 383-386
- Sotaro Sekimoto: The effect of the following vowel on the frequency normalization in the perception of voiceless stop consonants. 387-390
- Toshiko Muranaka, Noriyo Hara: Features of prominent particles in Japanese discourse, frequency, functions and acoustic features. 395-398
- Shuping Ran, J. Bruce Millar, Iain MacLeod: Vowel quality assessment based on analysis of distinctive features. 399-402
- Cristina Delogu, Stella Conte, Ciro Sementina: Differences in the fluctuation of attention during the listening of natural and synthetic passages. 403-406
- Barbara Heuft, Thomas Portele: Production and perception of words with identical segmental structure but different number of syllables. 407-410
- Caroline B. Huang, Mark A. Son-Bell, David M. Baggett: Generation of pronunciations from orthographies using transformation-based error-driven learning. 411-414
- Hidenori Usuki, Jouji Suzuki, Tetsuya Shimamura: Characteristics of mispronunciation and hesitation in Japanese tongue twister. 415-418
- Jean-Claude Junqua: A duration study of speech vowels produced in noise. 419-422
- Bert Van Coile, Luc Van Tichelen, Annemie Vorstermans, J. W. Jang, M. Staessen: PROTRAN: a prosody transplantation tool for text-to-speech applications. 423-426
- Klaus J. Kohler: Complementary phonology: a theoretical frame for labelling an acoustic data base of dialogues. 427-430
- Sun-Ah Jun, Mary E. Beckman: Distribution of devoiced high vowels in Korean. 479-482
- Yeo Bom Yoon: CV as a phonological unit in Korean. 483-486
- Manjari Ohala: Experiments on the syllable in Hindi. 487-490
- John J. Ohala: Towards a universal, phonetically-based, theory of vowel harmony. 491-494
- John Ingram, Tom Mylne: Perceptual parsing of nasal vowels. 495-498
- Oded Ghitza, M. Mohan Sondhi: On the perceptual distance between speech segments. 499-502
- Masato Akagi, Astrid van Wieringen, Louis C. W. Pols: Perception of central vowel with pre- and post-anchors. 503-506
- Mario Rossi, Evelyne Peter-Defare, Regine Vial: Phonological mechanisms of French speech errors. 507-510
- Mukhlis Abu-Bakar, Nick Chater: Phonetic prototypes: modelling the effects of speaking rate on the internal structure of a voiceless category using recurrent neural networks. 511-514
- William J. Hardcastle: EPG and acoustic study of some connected speech processes. 515-518
- Osamu Fujimura: Syllable timing computation in the C/D model. 519-522
- Tatiana Slama-Cazacu: Contribution of psycholinguistic perspective for speech technologies. 523-526
Adaptation and Training for Speech Recognition
- Yutaka Tsurumi, Seiichi Nakagawa: An unsupervised speaker adaptation method for continuous parameter HMM by maximum a posteriori probability estimation. 431-434
- Koichi Shinoda, Takao Watanabe: Unsupervised speaker adaptation for speech recognition using demi-syllable HMM. 435-438
- Wu Chou, C.-H. Lee, Biing-Hwang Juang: Minimum error rate training of inter-word context dependent acoustic model units in speech recognition. 439-442
- Jia-Lin Shen, Hsin-Min Wang, Ren-Yuan Lyu, Lin-Shan Lee: Incremental speaker adaptation using phonetically balanced training sentences for Mandarin syllable recognition based on segmental probability models. 443-446
- Lorenzo Fissore, Giorgio Micca, Franco Ravera: Incremental training of a speech recognizer for voice dialling-by-name. 447-450
- C. J. Leggetter, Philip C. Woodland: Speaker adaptation of continuous density HMMs using multivariate linear regression. 451-454
- Kazumi Ohkura, Hiroki Ohnishi, Masayuki Iida: Speaker adaptation based on transfer vectors of multiple reference speakers. 455-458
- Nikko Ström: Experiments with a new algorithm for fast speaker adaptation. 459-462
- Tung-Hui Chiang, Yi-Chung Lin, Keh-Yih Su: A study of applying adaptive learning to a multi-module system. 463-466
- Jun'ichi Nakahashi, Eiichi Tsuboka: Speaker adaptation based on fuzzy vector quantization. 467-470
- Myung-Kwang Kong, Seong-Kwon Lee, Soon-Hyob Kim: A study on the simulated annealing of self organized map algorithm for Korean phoneme recognition. 471-474
- Celinda de la Torre, Alejandro Acero: Discriminative training of garbage model for non-vocabulary utterance rejection. 475-478
Science and Technology for Multimodal Interfaces
- Eric Vatikiotis-Bateson, Inge-Marie Eigsti, Sumio Yano: Listener eye movement behavior during audiovisual speech perception. 527-530
- Dominic W. Massaro, Michael M. Cohen: Auditory/visual speech in multimodal human interfaces. 531-534
- Tadahisa Kondo, Kazuhiko Kakehi: Effects of phonological and semantic information of kanji and kana characters on speech perception. 535-538
- Patricia K. Kuhl, Minoru Tsuzaki, Yoh'ichi Tohkura, Andrew N. Meltzoff: Human processing of auditory-visual information in speech perception: potential for multimodal human-machine interfaces. 539-542
- Alex Pentland, Trevor Darrell: Visual perception of human bodies and faces for multi-modal interfaces. 543-546
- Paul Duchnowski, Uwe Meier, Alex Waibel: See me, hear me: integrating automatic speech recognition and lip-reading. 547-550
- Sharon L. Oviatt, Erik Olsen: Integration themes in multimodal human-computer interaction. 551-554
- David A. Berkley, James L. Flanagan, Kathleen L. Shipley, Lawrence R. Rabiner: A multimodal teleconferencing system using hands-free voice control. 555-558
- Paul Bertelson, Jean Vroomen, Geert Wiegeraad, Béatrice de Gelder: Exploring the relation between McGurk interference and ventriloquism. 559-562
- Jean-Claude Junqua, Philippe Morin: Naturalness of the interaction in multimodal applications. 563-566
- Haru Ando, Yoshinori Kitahara, Nobuo Hataoka: Evaluation of multimodal interface using spoken language and pointing gesture on interior design system. 567-570
- Kyung-ho Loken-Kim, Fumihiro Yato, Laurel Fais, Tsuyoshi Morimoto, Akira Kurematsu: Linguistic and paralinguistic differences between multimodal and telephone-only dialogues. 571-574
Measurements and Models of Speech Production
- Richard C. Rose, Juergen Schroeter, Man Mohan Sondhi: An investigation of the potential role of speech production models in automatic speech recognition. 575-578
- Tokihiko Kaburagi, Masaaki Honda: A trajectory formation model of articulatory movements based on the motor tasks of phoneme-specific vocal tract shapes. 579-582
- Martine George, Paul Jospa, Alain Soquet: Articulatory trajectories generated by the control of the vocal tract by a neural network. 583-586
- Makoto Hirayama, Eric Vatikiotis-Bateson, Vincent L. Gracco, Mitsuo Kawato: Neural network prediction of lip shape from muscle EMG in Japanese speech. 587-590
- Masahiro Hiraike, Shigehisa Shimizu, Takao Mizutani, Kiyoshi Hashimoto: Estimation of the lateral shape of a tongue from speech. 591-594
- Paul Jospa, Alain Soquet: The acoustic-articulatory mapping and the variational method. 595-598
- Xavier Pelorson, T. Lallouache, S. Tourret, C. Bouffartigue, Pierre Badin: Aerodynamical, geometrical and mechanical aspects of bilabial plosives production. 599-602
- Jianwu Dang, Kiyoshi Honda: Investigation of the acoustic characteristics of the velum for vowels. 603-606
- Kunitoshi Motoki, Pierre Badin, Nobuhiro Miki: Measurement of acoustic impedance density distribution in the near field of the labial horn. 607-610
- Jean Schoentgen, Sorin Ciocea: Explicit relations between resonance frequencies and vocal tract cross sections in loss-less Kelly-Lochbaum and distinctive region vocal tract models. 611-614
- Vesa Välimäki, Matti Karjalainen: Improving the Kelly-Lochbaum vocal tract model using conical tube sections and fractional delay filtering techniques. 615-618
- Masafumi Matsumura, Takuya Nukawa, Koji Shimizu, Yasuji Hashimoto, Tatsuya Morita: Measurement of 3-D shapes of vocal tract, dental crown and nasal cavity using MRI: vowels and fricatives. 619-622
- Chang-Sheng Yang, Hideki Kasuya: Accurate measurement of vocal tract shapes from magnetic resonance images of child, female and male subjects. 623-626
- Shrikanth S. Narayanan, Abeer Alwan, Katherine Haker: An MRI study of fricative consonants. 627-630
- Eric Vatikiotis-Bateson, Mark K. Tiede, Yasuhiro Wada, Vincent L. Gracco, Mitsuo Kawato: Phoneme extraction using via point estimation of real speech. 631-634
- Hiroki Matsuzaki, Nobuhiro Miki, Nobuo Nagai, Tohru Hirohku, Yoshihiko Ogawa: 3-D FEM analysis of vocal tract model of elliptic tube with inhomogeneous-wall impedance. 635-638
- Yuki Kakita, Hitoshi Okamoto: Chaotic characteristics of voice fluctuation and its model explanation: normal and pathological voices. 639-642
- Tadashige Ikeda, Yuji Matsuzaki: Flow theory for analysis of phonation with a membrane model of vocal cord. 643-647
- B. Craig Dickson, John H. Esling, Roy C. Snell: Real-time processing of electroglottographic waveforms for the evaluation of phonation types. 647-650
- Donna Erickson, Kiyoshi Honda, Hiroyuki Hirai, Mary E. Beckman, Seiji Niimi: Global pitch range and the production of low tones in English intonation. 651-654
- Masafumi Matsumura, Kazuo Kimura, Katsumi Yoshino, Takashi Tachimura, Takeshi Wada: Measurement of palatolingual contact pressure during consonant productions using strain gauge transducer mounted palatal plate. 655-658
- Kohichi Ogata, Yorinobu Sonoda: A study of sensor arrangements for detecting movements and inclinations of tongue point during speech. 659-662
- Shinobu Masaki, Kiyoshi Honda: Estimation of temporal processing unit of speech motor programming for Japanese words based on the measurement of reaction time. 663-666
Applications of Spoken Language Processing
- Jay G. Wilpon, David B. Roe: Applications of speech recognition technology in telecommunications. 667-670
- Tsuneo Nitta: Speech recognition applications in Japan. 671-674
- Tomohisa Hirokawa: Trends in the applications of and market for speech synthesis technology. 675-678
- Baruch Mazor, Jerome Braun, Bonnie Zeigler, Solomon Lerner, Ming-Whei Feng, Han Zhou: OASIS - a speech recognition system for telephone service orders. 679-682
- Ronald A. Cole, David G. Novick, Mark A. Fanty, Pieter J. E. Vermeulen, Stephen Sutton, Daniel C. Burnett, Johan Schalkwyk: A prototype voice-response questionnaire for the U.S. census. 683-686
- Toshiaki Tsuboi, Shigeru Homma, Shoichi Matsunaga: A speech-to-text transcription system for medical diagnoses. 687-690
- Marc Dymetman, Julie Brousseau, George F. Foster, Pierre Isabelle, Yves Normandin, Pierre Plamondon: Towards an automatic dictation system for translators: the TransTalk project. 691-694
- Kamil A. Grajski, Kurt Rodarmer: Real-time, speaker-independent, continuous Spanish speech recognition for personal computer desktop command & control. 695-698
- Jun Noguchi, Shinsuke Sakai, Kaichiro Hatazaki, Ken-ichi Iso, Takao Watanabe: An automatic voice dialing system developed on PC speech I/O platform. 699-702
- Martin Oerder, Harald Aust: A realtime prototype of an automatic inquiry system. 703-706
- David Goddeau, Eric Brill, James R. Glass, Christine Pao, Michael S. Phillips, Joseph Polifroni, Stephanie Seneff, Victor W. Zue: GALAXY: a human-language interface to on-line travel information. 707-710
Speech Synthesis I, II
- Merle Horne, Marcus Filipsson: Generating prosodic structure for Swedish text-to-speech. 711-714
- Alan W. Black, Paul Taylor: Assigning intonation elements and prosodic phrasing for English speech synthesis from high level linguistic input. 715-718
- Jan P. H. van Santen, Julia Hirschberg: Segmental effects on timing and height of pitch contours. 719-722
- Toshiaki Fukada, Yasuhiro Komori, Takashi Aso, Yasunori Ohora: A study on pitch pattern generation using HMM-based statistical information. 723-726
- Olivier Boëffard, Fábio Violaro: Using a hybrid model in a text-to-speech system to enlarge prosodic modifications. 727-730
- Akio Ando, Eiichi Miyasaka: A new method for estimating Japanese speech rate. 731-734
- Emmy M. Konst, Lou Boves: Automatic grapheme-to-phoneme conversion of Dutch names. 735-738
- Briony Williams: Diphone synthesis for the Welsh language. 739-742
- Shinichi Doi, Kazuhiko Iwata, Kazunori Muraki, Yukio Mitome: Pause control in Japanese text-to-speech conversion system with lexical discourse grammar. 743-746
- Naohiro Sakurai, Takemi Mochida, Tetsunori Kobayashi, Katsuhiko Shirai: Generation of prosody in speech synthesis using large speech data-base. 747-750
- Niels-Jørn Dyhr, Marianne Elmlund, Carsten Henriksen: Preserving naturalness in synthetic voices while minimizing variation in formant frequencies and bandwidths. 751-754
- Kazuhiro Takahashi, Kazuhiko Iwata, Yukio Mitome, Keiko Nagano: Japanese text-to-speech conversion software for personal computers. 1743-1746
- Annemie Vorstermans, Jean-Pierre Martens: Automatic labeling of speech synthesis corpora. 1747-1750
- Yasushi Ishikawa, Kunio Nakajima: On synthesis units for Japanese text-to-speech synthesis. 1751-1754
- Judith L. Klavans, Evelyne Tzoukermann: Inducing concatenative units from machine readable dictionaries and corpora for speech synthesis. 1755-1758
- Thomas Portele, Florian Höfer, Wolfgang J. Hess: Structure and representation of an inventory for German speech synthesis. 1759-1762
- Anne Lacheret-Dujour, Vincent Pean: Towards a prosodic cues-based modelling of phonological variability for text-to-speech synthesis. 1763-1766
- Isabel Trancoso, Céu Viana, Fernando M. Silva, Goncalo C. Marques, Luís C. Oliveira: Rule-based vs neural network-based approaches to letter-to-phone conversion for Portuguese common and proper names. 1767-1770
- Benjamin Ao, Chilin Shih, Richard Sproat: A corpus-based Mandarin text-to-speech synthesizer. 1771-1774
- Kazuo Hakoda, Tomohisa Hirokawa, Kenzo Itoh: Speech editor based on enhanced user-system interaction for high quality text-to-speech synthesis. 1775-1778
- Mats Ljungqvist, Anders Lindström, Kjell Gustafson: A new system for text-to-speech conversion, and its application to Swedish. 1779-1782
- Yoshinori Shiga, Yoshiyuki Hara, Tsuneo Nitta: A novel segment-concatenation algorithm for a cepstrum-based synthesizer. 1783-1786
- Florien J. Koopmans-van Beinum, Louis C. W. Pols: Naturalness and intelligibility of rule-synthesized speech, supplied with specific spectro-temporal features derived from natural continuous speech. 1787-1790
New Approach for Brain Function Research in Speech Perception and Production
- Karalyn Patterson, Karen Croot, John R. Hodges: Speech production: insights from a study of progressive aphasia. 755-758
- Makoto Iwata, Yasuhisa Sakurai, Toshimitsu Momose: Functional mapping of cerebral mechanism of reading in the Japanese language. 759-762
- Dana F. Boatman, Ronald P. Lesser, Barry Gordon: Cortical representation of speech perception and production, as revealed by direct cortical electrical interference. 763-766
- Michael D. Rugg, Catherine J. C. Cox, Michael C. Doyle: Investigating word recognition and language comprehension with event-related brain potentials. 767-770
- Sue Franklin, Julie Morris, Judy Turner: Dissociations in word deafness. 771-774
- Akira Uno, Jun Tanemura, Koichi Higo: Recovery mechanism of naming disorders in aphasic patients: effects of different training modalities. 775-778
Language Modeling for Speech Recognition
- Michael K. Brown, Stephen C. Glinski: Stochastic context-free language modeling with evolutional grammars. 779-782
- Nigel Ward: A lightweight parser for speech understanding. 783-786
- Takeshi Kawabata: Dynamic probabilistic grammar for spoken language disambiguation. 787-790
- Kouichi Yamaguchi, Harald Singer, Shoichi Matsunaga, Shigeki Sagayama: Speaker-consistent parsing for speaker-independent continuous speech recognition. 791-794
- Masaaki Nagata: A stochastic morphological analyzer for spontaneously spoken languages. 795-798
- Jean-Yves Antoine, Jean Caelen, Bertrand Caillaud: Automatic adaptive understanding of spoken language by cooperation of syntactic parsing and semantic priming. 799-802
- Adwait Ratnaparkhi, Salim Roukos, Todd Ward: A maximum entropy model for parsing. 803-806
- Jiro Kiyama, Yoshiaki Itoh, Ryuichi Oka: Sentence spotting using continuous structuring method. 807-810
- Hiroyuki Sakamoto, Shoichi Matsunaga: Continuous speech recognition using a dialog-conditioned stochastic language model. 811-814
- Tatsuya Kawahara, Toshihiko Munetsugu, Norihide Kitaoka, Shuji Doshita: Keyword and phrase spotting with heuristic language model. 815-818
- Jin'ichi Murakami, Shoichi Matsunaga: A spontaneous speech recognition algorithm using word trigram models and filled-pause procedure. 819-822
- Masayuki Yamada, Yasuhiro Komori, Yasunori Ohora: Active/non-active word control using garbage model, unknown word re-evaluation in speech conversation. 823-826
- Lin Lawrence Chase, Ronald Rosenfeld, Wayne H. Ward: Error-responsive modifications to speech recognizers: negative n-grams. 827-830
- Bernhard Suhm, Alex Waibel: Towards better language models for spontaneous speech. 831-834
- Michael K. McCandless, James R. Glass: Empirical acquisition of language models for speech recognition. 835-838
- Shigeru Fujio, Yoshinori Sagisaka, Norio Higuchi: Prediction of prosodic phrase boundaries using stochastic context-free grammar. 839-842
- Egidio P. Giachin, Paolo Baggia, Giorgio Micca: Language models for spontaneous speech recognition: a bootstrap method for learning phrase digrams. 843-846
- Monika Woszczyna, Alex Waibel: Inferring linguistic structure in spoken language. 847-850
- Germán Bordel, M. Inés Torres, Enrique Vidal: Back-off smoothing in a syntactic approach to language modelling. 851-854
- H.-H. Shih, Steve J. Young: Computer assisted grammar construction. 855-858
- Giuliano Antoniol, Fabio Brugnara, Mauro Cettolo, Marcello Federico: Language model estimations and representations for real-time continuous speech recognition. 859-862
- Bruno Jacob, Régine André-Obrecht: Sub-dictionary statistical modeling for isolated word recognition. 863-866
- Michèle Jardino: A class bigram model for very large corpus. 867-870
Models and Systems for Spoken Dialogue
- Akio Amano, Toshiyuki Odaka: A spoken dialogue system based on hierarchical feedback mechanism. 871-874
- Niels Ole Bernsen, Laila Dybkjær, Hans Dybkjær: A dedicated task-oriented dialogue theory in support of spoken language dialogue systems design. 875-878
- Farzad Ehsani, Kaichiro Hatazaki, Jun Noguchi, Takao Watanabe: Interactive speech dialogue system using simultaneous understanding. 879-882
- Masahiro Araki, Taro Watanabe, Felix C. M. Quimbo, Shuji Doshita: A cooperative man-machine dialogue model for problem solving. 883-886
- Osamu Yoshioka, Yasuhiro Minami, Kiyohiro Shikano: A multi-modal dialogue system for telephone directory assistance. 887-890
- Mark Terry, Randall Sparks, Patrick Obenchain: Automated query identification in English dialogue. 891-894
- Keiichi Sakai, Yuji Ikeda, Minoru Fujita: Robust discourse processing considering misrecognition in spoken dialogue system. 895-898
- Keiko Watanuki, Kenji Sakamoto, Fumio Togawa: Analysis of multimodal interaction data in human communication. 899-902
- Kazuhiro Arai: Changes in user's responses with use of a speech dialog system. 903-906
- Katunobu Itou, Tomoyosi Akiba, Osamu Hasegawa, Satoru Hayamizu, Kazuyo Tanaka: Collecting and analyzing nonverbal elements for maintenance of dialog using a Wizard of Oz simulation. 907-910
- Giovanni Flammia, James R. Glass, Michael S. Phillips, Joseph Polifroni, Stephanie Seneff, Victor W. Zue: Porting the bilingual Voyager system to Italian. 911-914
- Gen-ichiro Kikui, Tsuyoshi Morimoto: Similarity-based identification of repairs in Japanese spoken language. 915-918
- Lars Bo Larsen, Anders Baekgaard: Rapid prototyping of a dialogue system using a generic dialogue development platform. 919-922
- Shozo Naito, Akira Shimazu: Heuristics for generating acoustic stress in dialogues and examination of their validity. 923-926
- Jacques Siroux, Mouloud Kharoune, Marc Guyomard: Application and dialogue in the SUNDIAL system. 927-930
- Shin-ichiro Kamei, Shinichi Doi, Takako Komatsu, Susumu Akamine, Hitoshi Iida, Kazunori Muraki: A dialog analysis using information of the previous sentence. 931-934
- Kiyoshi Kogure, Akira Shimazu, Mikio Nakano: Recognizing plans in more natural dialogue utterances. 935-938
- Bernd Hildebrandt, Gernot A. Fink, Franz Kummert, Gerhard Sagerer: Understanding of time constituents in spoken language dialogues. 939-942
- Tadahiko Kumamoto, Akira Ito, Tsuyoshi Ebina: An analysis of Japanese sentences in spoken dialogue and its application to communicative intention recognition. 943-946
- Beth Ann Hockey: Extra propositional focus and belief revision. 947-950
- Daniel Schang, Laurent Romary: Frames, a unified model for the representation of reference and space in a man-machine dialogue. 951-954
- Masahito Kawamori, Akira Shimazu, Kiyoshi Kogure: Roles of interjectory utterances in spoken discourse. 955-958
- Yukiko Ishikawa: Communicative mode dependent contribution from the recipient in information providing dialogue. 959-962
- Alain Cozannet, Jacques Siroux: Strategies for oral dialogue control. 963-966
- Astrid Brietzmann, Fritz Class, Ute Ehrlich, Paul Heisterkamp, Alfred Kaltenmeier, Klaus Mecklenburg, Peter Regel-Brietzmann: Robust speech understanding. 967-970
- Yoichi Yamashita, Keiichi Tajima, Yasuo Nomura, Riichiro Mizoguchi: Dialog context dependencies of utterances generated from concept representation. 971-974
- Shu Nakazato, Katsuhiko Shirai: Effects on utterances caused by knowledge on the hearer. 975-978
- Alexandre Ferrieux, M. David Sadek: An efficient data-driven model for cooperative spoken dialogue. 979-982
- James R. Glass, Joseph Polifroni, Stephanie Seneff: Multilingual language generation across multiple domains. 983-986
Speech Recognition in Adverse Environments
- Chafic Mokbel, R. Paches-Leal, Denis Jouvet, Jean Monné: Compensation of telephone line effects for robust speech recognition. 987-990
- Jun-ichi Takahashi, Shigeki Sagayama: Telephone line characteristic adaptation using vector field smoothing technique. 991-994
- Jane Chang, Victor W. Zue: A study of speech recognition system robustness to microphone variations: experiments in phonetic classification. 995-998
- Tadashi Suzuki, Kunio Nakajima, Yoshiharu Abe: Isolated word recognition using models for acoustic phonetic variability by Lombard effect. 999-1002
- John H. L. Hansen, Brian D. Womack, Levent M. Arslan: A source generator based production model for environmental robustness in speech recognition. 1003-1006
- Hiroshi Matsumoto, Hiroyuki Imose: A frequency-weighted continuous density HMM for noisy speech recognition. 1007-1010
- Lee-Min Lee, Hsiao-Chuan Wang: A study on adaptations of cepstral and delta cepstral coefficients for noisy speech recognition. 1011-1014
- Kuldip K. Paliwal, Bishnu S. Atal: A comparative study of feature representations for robust speech recognition in adverse environments. 1015-1018
- Hugo Van hamme: ARDOSS: autoregressive domain spectral subtraction for robust speech recognition in additive noise. 1019-1022
- Keizaburo Takagi, Hiroaki Hattori, Takao Watanabe: Speech recognition with rapid environment adaptation by spectrum equalization. 1023-1026
- Richard M. Stern, Fu-Hua Liu, Pedro J. Moreno, Alejandro Acero: Signal processing for robust speech recognition. 1027-1030
- Olivier Siohan, Yifan Gong, Jean Paul Haton: A comparison of three noisy speech recognition approaches. 1031-1034
Speech Analysis
- Douglas A. Cairns, John H. L. Hansen: Nonlinear speech analysis using the Teager energy operator with application to speech classification under stress. 1035-1038
- Paul A. Moakes, Steve W. Beet: Analysis of non-linear speech generating dynamics. 1039-1042
- Keiichi Tokuda, Takao Kobayashi, Takashi Masuko, Satoshi Imai: Mel-generalized cepstral analysis - a unified approach to speech spectral estimation. 1043-1046
- I. R. Gransden, Steve W. Beet: Combining auditory representations using fuzzy sets. 1047-1050
- Shoji Kajita, Fumitada Itakura: SBCOR spectrum taking autocorrelation coefficients at integral multiples of 1/CF into account. 1051-1054
- Hema A. Murthy: Pitch extraction from root cepstrum. 1055-1058
- Sunghoon Hong, Sangki Kang, Souguil Ann: Voice parameter estimation using sequential SVD and wave shaping filter bank. 1059-1062
- Jean Schoentgen: Self excited threshold auto-regressive models of the glottal pulse and the speech signal. 1063-1066
- Wolfgang J. Hess: Determination of glottal excitation cycles for voice quality analysis. 1067-1070
- Alain de Cheveigné: Strategies for voice separation based on harmonicity. 1071-1074
- Yukio Mitome: Speech analysis technique for PSOLA synthesis based on complex cepstrum analysis and residual excitation. 1075-1078
Prosody of Discourse and Dialogue
- Shigeru Kiritani, Kikuo Maekawa, Hajime Hirose: Intonation pattern with focus and related muscle activities in Tokyo dialect. 1079-1082
- Jianfen Cao: The effects of contrastive accent and lexical stress upon temporal distribution in a sentence. 1083-1086
- Henrietta J. Cedergren, Hélène Perreault: Speech rate and syllable timing in spontaneous speech. 1087-1090
- Hyunbok Lee, Narn-taek Jin, Cheol-jae Seong, Il-jin Jung, Seung-mie Lee: An experimental phonetic study of speech rhythm in standard Korean. 1091-1094
- Noriko Umeda, Toby Wedmore: A rhythm theory for spontaneous speech: the role of vowel amplitude in the rhythmic hierarchy. 1095-1098
- Gösta Bruce, Björn Granström, Kjell Gustafson, David House, Paul Touati: Modelling Swedish prosody in a dialogue framework. 1099-1102
- Hiroya Fujisaki, Sumio Ohno, Masafumi Osame, Mayumi Sakata, Keikichi Hirose: Prosodic characteristics of a spoken dialogue for information query. 1103-1106
- Shoichi Takeda, Yoshiyuki Itoh, Norifumi Sakuma, Kei Yokosato: Analysis of prosodic and linguistic features of spontaneous Japanese conversational speech. 1107-1110
- Nick Campbell: Combining the use of duration and F0 in an automatic analysis of dialogue prosody. 1111-1114
- Gabriele Bakenecker, Hans Ulrich Block, Anton Batliner, Ralf Kompe, Elmar Nöth, Peter Regel-Brietzmann: Improving parsing by incorporating prosodic clause boundaries into a grammar. 1115-1118
- Andrew Hunt: A prosodic recognition module based on linear discriminant analysis. 1119-1122
- Keikichi Hirose, Atsuhiro Sakurai, Hiroyuki Konno: Use of prosodic features in the recognition of continuous speech. 1123-1126
Spoken Language Cognition and Its Disorders
- Taeko Nakayama Wydell, Brian Butterworth: The inconsistency of consistency effects in reading: the case of Japanese kanji phonology. 1127-1130
- Valter Ciocca, Livia Wong, Lydia K. H. So: An acoustic analysis of unreleased stop consonants in word-final position. 1131-1134
- Jean Vroomen, Béatrice de Gelder: Speech segmentation in Dutch: no role for the syllable. 1135-1138
- James M. McQueen: Do ambiguous fricatives rhyme? Lexical involvement in phonetic decision-making depends on task demands. 1139-1142
- Pierre A. Hallé, Juan Segui: Moraic segmentation in Japanese revisited. 1143-1146
- Jennifer J. Venditti, Hiroko Yamashita: Prosodic information and processing of temporarily ambiguous constructions in Japanese. 1147-1150
- Nobuaki Minematsu, Keikichi Hirose: Role of prosodic features in the human process of speech perception. 1151-1154
- Masahiro Hashimoto, Hideaki Seki: Limitations of lip-reading advantage by desynchronizing visual and auditory information in speech. 1155-1158
- Sue Franklin, Judy Turner, Julie Morris: Word meaning deafness: effects of word type. 1159-1162
- Mikio Masukata, Seiichi Nakagawa: Concept and grammar acquisition based on combining with visual and auditory information. 1163-1166
- Gavin J. Dempster, Sheila M. Williams, Sandra P. Whiteside: The Punch and Judy man: a study of phonological / phonetic variation. 1167-1170
- Hartmut Traunmüller, Renée van Bezooijen: The auditory perception of children's age and sex. 1171-1174
- James S. Magnuson, Reiko Akahane-Yamada, Howard C. Nusbaum: Are representations used for talker identification available for talker normalization? 1175-1178
- Yoko Hasegawa, Kazue Hata: Non-physiological differences between male and female speech: evidence from the delayed F0 fall phenomenon in Japanese. 1179-1182
- Tatsuya Kitamura, Masato Akagi: Speaker individualities in speech spectral envelopes. 1183-1186
- Duncan Markham: Prosodic imitation: productional results. 1187-1190
- Fiona Gibbon, William J. Hardcastle: Articulatory description of affricate production in speech disordered children using electropalatography (EPG). 1191-1194
- Akira Ujihira, Haruo Kubozono: A phonetic and phonological analysis of stuttering in Japanese. 1195-1198
- Donald G. Jamieson, Susan Rvachew: Perception, production and training of new consonant contrasts in children with articulation disorders. 1199-1202
- Sachiko Nakakoshi, Atsushi Mizobuchi, Hiroto Katori: Cognitive processes of speech sounds in a brain-damaged patient. 1203-1206
- N. Suzuki, H. Dent, Masahiko Wakumoto, Fiona Gibbon, Ken-ich Michi, William J. Hardcastle: A cross-linguistic study of lateral /s/ using electropalatography (EPG). 1207-1210
- Junko Matsubara, Toshihiro Kashiwagi, Morio Kohno, Hirotaka Tanabe, Asako Kashiwagi: Prosody of recurrent utterances in aphasic patients. 1211-1214
- Virginia LoCastro: Intonation and language teaching. 1215-1218
- Tsuyoshi Nara, P. Bhaskararao: A computer-aided phonetic instruction system for South Asian languages. 1219-1222
- Morio Kohno, Junko Matsubara, Katsuko Higuchi, Toshihiro Kashiwagi: Rhythm processing by a patient with pure anarthria: some suggestions on the role of rhythm in spoken language processing. 1223-1226
- Nobuko Yamada: Japanese accentuation of foreign learners and its interlanguage. 1227-1230
- Masato Kaneko: Mechanisms producing recurring utterances in a patient with slowly progressive aphasia. 1231-1234
- Kiyokata Katoh, Takako Ayusawa, Yukihiro Nishinuma, Richard Harrison, Kikuko Yamashita: Hypermedia for spoken language education. 1235-1238
- P. Bhaskararao, Venkata N. Peri, Vishwas Udpikar: A text-to-speech system for application by visually handicapped and illiterate. 1239-1242
Spoken Language Systems and Assessments
- Diego Giuliani, Maurizio Omologo, Piergiorgio Svaizer: Talker localization and speech recognition using a microphone array and a cross-powerspectrum phase analysis. 1243-1246
- Qiguang Lin, Ea-Ee Jan, ChiWei Che, Bert de Vries: System of microphone arrays and neural networks for robust speech recognition in multimedia environments. 1247-1250
- Manny Rayner, David M. Carter, Patti Price, Bertil Lyberg: Estimating performance of pipelined spoken language translation systems. 1251-1254
- Cheol-Woo Jo, Kyung-Tae Kim, Yong-Ju Lee: Generation of multi-syllable nonsense words for the assessment of Korean text-to-speech system. 1255-1258
- Aruna Bayya, Michael Durian, Lori Meiskey, Rebecca Root, Randall Sparks, Mark Terry: Voice map: a dialogue-based spoken language information access system. 1259-1262
- Shigenobu Seto, Kazuhiro Kimura: Development of a document preparation system with speech command using EDR electronic dictionaries. 1263-1266
- Bianca Angelini, Giuliano Antoniol, Fabio Brugnara, Mauro Cettolo, Marcello Federico, Roberto Fiutem, Gianni Lazzari: Radiological reporting by speech recognition: the A.Re.S. system. 1267-1270
- Samir Bennacef, Hélène Bonneau-Maynard, Jean-Luc Gauvain, Lori Lamel, Wolfgang Minker: A spoken language system for information retrieval. 1271-1274
- Børge Lindberg: Recogniser response modelling from testing on series of minimal word pairs. 1275-1278
- Toshimitsu Minowa, Yasuhiko Arai, Hisanori Kanasashi, Tatsuya Kimura, Takuji Kawamoto: A study on the problems for application of voice interface based on word recognition. 1279-1282
- Hiroyuki Kamio, Mika Koorita, Hiroshi Matsu'ura, Masafumi Tamura, Tsuneo Nitta: A UI design support tool for multimodal spoken dialogue system. 1283-1286
- Takuya Nishimoto, Nobutoshi Shida, Tetsunori Kobayashi, Katsuhiko Shirai: Multimodal drawing tool using speech, mouse and keyboard. 1287-1290
- Yasuhiko Arai, Toshimitsu Minowa, Hiroko Yoshida, Hirofumi Nishimura, Hiroyuki Kamata, Takashi Honda: Generation of non-entry words from entries of the natural speech database. 1291-1294
- Pedro Gómez-Vilda, Daniel Martinez, Victor Nieto Lluis, Victoria Rodellar: MECALLSAT: a multimedia environment for computer-aided language learning incorporating speech assessment techniques. 1295-1298
- Arthur E. McNair, Alex Waibel: Improving recognizer acceptance through robust, natural speech repair. 1299-1302
- David Fay: User acceptance of automatic speech recognition in telephone services. 1303-1306
- Stephen Love, R. T. Dutton, John C. Foster, Mervyn A. Jack, F. W. M. Stentiford: Identifying salient usability attributes for automated telephone services. 1307-1310
- Arnd Mariniak: Word complexity measures in the context of speech intelligibility tests. 1311-1314
- Frank H. Wu, Monica A. Maries: Recognition accuracy methods and measures. 1315-1318
- Ute Jekosch, Louis C. W. Pols: A feature-profile for application-specific speech synthesis assessment and evaluation. 1319-1322
- Thomas Hegehofer: A description model for speech assessment tests with subjects. 1323-1326
- Victoria Rodellar, Antonio Diaz, Jose Gallardo, Virginia Peinado, Victor Nieto Lluis, Pedro Gómez: VLSI implementation of a robust hybrid parameter-extractor and neural network for speech decoding. 1327-1330
- Toshiro Watanabe, Shinji Hayashi: An objective measure for qualitatively assessing low-bit-rate coded speech. 1331-1334
- Kazuhiko Ozeki: Performance comparison of recognition systems based on the Akaike information criterion. 1335-1338
- Nobutoshi Hanai, Richard M. Stern: Robust speech recognition in the automobile. 1339-1342
- Javier Macías Guarasa, Manuel A. Leandro, José Colás, Álvaro Villegas, Santiago Aguilera, José Manuel Pardo: On the development of a dictation machine for Spanish: DIVO. 1343-1346
- Yoshiaki Ohshima, Richard M. Stern: Environmental robustness in automatic speech recognition using physiologically-motivated signal processing. 1347-1350
Large Vocabulary/Speaker Independent Speech Recognition
- V. Valtchev, J. J. Odell, Philip C. Woodland, Steve J. Young: A dynamic network decoder design for large vocabulary speech recognition. 1351-1354
- Hermann Ney, Xavier L. Aubert: A word graph algorithm for large vocabulary, continuous speech recognition. 1355-1358
- Michael S. Phillips, David Goddeau: Fast match for segment-based large vocabulary continuous speech recognition. 1359-1362
- Chuck Wooters, Andreas Stolcke: Multiple-pronunciation lexical modeling in a speaker independent speech understanding system. 1363-1366
- Yves Normandin, Roxane Lacouture, Régis Cardin: MMIE training for large vocabulary continuous speech recognition. 1367-1370
- Yen-Ju Yang, Sung-Chien Lin, Lee-Feng Chien, Keh-Jiann Chen, Lin-Shan Lee: An intelligent and efficient word-class-based Chinese language model for Mandarin speech recognition with very large vocabulary. 1371-1374
- Tetsuo Kosaka, Shoichi Matsunaga, Shigeki Sagayama: Tree-structured speaker clustering for speaker-independent continuous speech recognition. 1375-1378
- Tatsuya Kimura, Hiroyasu Kuwano, Akira Ishida, Taisuke Watanabe, Shoji Hiraoka: Compact-size speaker independent speech recognizer for large vocabulary using "compats" method. 1379-1382
- Yasuyuki Masai, Jun'ichi Iwasaki, Shin'ichi Tanaka, Tsuneo Nitta, Masahiro Yao, Tomohiro Onogi, Akira Nakayama: A keyword-spotting unit for speaker-independent spontaneous speech recognition. 1383-1386
- Myoung-Wan Koo, Sang-Kyu Park, Kyung-Tae Kong, Sam-joo Doh: KT-stock: a speaker-independent large-vocabulary speech recognition system over the telephone. 1387-1390
- Bianca Angelini, Fabio Brugnara, Daniele Falavigna, Diego Giuliani, Roberto Gretter, Maurizio Omologo: Speaker independent continuous speech recognition using an acoustic-phonetic Italian corpus. 1391-1394
Perception and Structure of Spoken Language
- Roy D. Patterson, Timothy R. Anderson, Michael Allerhand: The auditory image model as a preprocessor for spoken language. 1395-1398
- Hideki Kawahara: Effects of natural auditory feedback on fundamental frequency control. 1399-1402
- Tomohiro Nakatani, Takeshi Kawabata, Hiroshi G. Okuno: Unified architecture for auditory scene analysis and spoken language processing. 1403-1406
- Anne Cutler, Duncan Young: Rhythmic structure of word blends in English. 1407-1410
- Kazuhiko Kakehi, Kazumi Kato: Perception for VCV speech uttered simultaneously or sequentially by two talkers. 1411-1414
- Shigeaki Amano: Perception of time-compressed/expanded Japanese words depends on the number of perceived phonemes. 1415-1418
- Monique Radeau, Juan Segui, José Morais: The effect of overlap position in phonological priming between spoken words. 1419-1422
- Masuzo Yanagida: A cognitive model of inferring unknown words and uncertain sound sequence. 1423-1426
- Takashi Otake, Kiyoko Yoneyama: A moraic nasal and a syllable structure in Japanese. 1427-1430
- Paula M. T. Smeele, Anne C. Sittig, Vincent J. van Heuven: Temporal organization of bimodal speech information. 1431-1434
- Sumi Shigeno: The use of auditory and phonetic memories in the discrimination of stop consonants under audio-visual presentation. 1435-1438
Voice Quality
- Inger Karlsson: Controlling voice quality of synthetic speech. 1439-1442
- Louis C. W. Pols: Voice quality of synthetic speech: representation and evaluation. 1443-1446
- Etsuko Ofuka, Hélène Valbret, Mitch G. Waterman, Nick Campbell, Peter Roach: The role of F0 and duration in signalling affect in Japanese: anger, kindness and politeness. 1447-1450
- Gunnar Fant, Anita Kruckenberg, Johan Liljencrants, Mats Båvegård: Voice source parameters in continuous speech, transformation of LF-parameters. 1451-1454
- Masanobu Abe, Hideyuki Mizuno: Speaking style conversion by changing prosodic parameters and formant frequencies. 1455-1458
- Hideki Kasuya, Xuan Tan, Chang-Sheng Yang: Voice source and vocal tract characteristics associated with speaker individuality. 1459-1462
- Sadaoki Furui, Tomoko Matsui: Phoneme-level voice individuality used in speaker recognition. 1463-1466
- Satoshi Imaizumi, Hartono Abdoerrachman, Seiji Niimi: Controllability of voice quality: evidence from physiological and acoustic observations. 1467-1470
- Guus de Krom: Spectral correlates of breathiness and roughness for different types of vowel fragments. 1471-1474
- John H. Esling, Lynn Marie Heap, Roy C. Snell, B. Craig Dickson: Analysis of pitch dependence of pharyngeal, faucal, and larynx-height voice quality settings. 1475-1478
Neural Network and Connectionist Approaches
- KyungMin Na, JaeYeol Rheem, SouGuil Ann: Minimum-error-rate training of predictive neural network models. 1479-1482
- Allen L. Gorin, H. Hanek, Richard C. Rose, Laura G. Miller: Spoken language acquisition for automated call routing. 1483-1486
- Eliathamby Ambikairajah, Owen Friel, William Millar: A speech recognition system using both auditory and afferent pathway signal processing. 1487-1490
- Steve Renals, Mike Hochberg: Using gamma filters to model temporal dependencies in speech. 1491-1494
- Jan P. Verhasselt, Jean-Pierre Martens: Phone recognition using a transition-controlled, segment-based DP/MLP hybrid. 1495-1498
- Mike Hochberg, Steve Renals, Anthony J. Robinson, Dan J. Kershaw: Large vocabulary continuous speech recognition using a hybrid connectionist-HMM system. 1499-1502
- Dong Yu, Taiyi Huang, Dao Wen Chen: A multi-state NN/HMM hybrid method for high performance speech recognition. 1503-1506
- Fikret S. Gürgen, J. M. Song, Robin W. King: A continuous HMM based preprocessor for modular speech recognition neural networks. 1507-1510
- Ying Cheng, Paul Fortier, Yves Normandin: System integrating connectionist and symbolic approaches for spoken language understanding. 1511-1514
- Xavier Menéndez-Pidal, Javier Ferreiros, Ricardo de Córdoba, José Manuel Pardo: Recent work in hybrid neural networks and HMM systems in CSR tasks. 1515-1518
- Jean-François Mari, Dominique Fohr, Yolande Anglade, Jean-Claude Junqua: Hidden Markov models and selectively trained neural networks for connected confusable word recognition. 1519-1522
- Yochai Konig, Nelson Morgan: Modeling dynamics in connectionist speech recognition - the time index model. 1523-1526
- Dao Wen Chen, Xiao-Dong Li, San Zhu, Dongxin Xu, Taiyi Huang: Mandarin syllables recognition by subsyllables dynamic neural network. 1527-1530
- Shigeki Okawa, Christoph Windheuser, Frédéric Bimbot, Katsuhiko Shirai: Evaluation of phonetic feature recognition with a time-delay neural network. 1531-1534
- Enric Monte, Javier Hernando Pericas: A self organizing feature map based on the Fisher discriminant. 1535-1538
- Richard R. Favero, Fikret S. Gürgen: Using wavelet dyadic grids and neural networks for speech recognition. 1539-1542
- Hiroaki Hattori: A normalization method of prediction error for neural networks. 1543-1546
- Philippe Le Cerf, Dirk Van Compernolle: Recurrent neural network word models for small vocabulary speech recognition. 1547-1550
- Yoshinaga Koto, Shigeru Katagiri: A novel fuzzy partition model architecture for classifying dynamic patterns. 1551-1554
- Martin Cooke, Phil D. Green, Malcolm Crawford: Handling missing data in speech recognition. 1555-1558
- Patrick Haffner: A new probabilistic framework for connectionist time alignment. 1559-1562
- Ken-ichi Iso: A speech recognition model using internal degrees of freedom. 1563-1566
- Dongxin Xu, Dao Wen Chen, Qian Ma, Bo Xu, Taiyi Huang: Adaptation of neural network model: comparison of multilayer perceptron and LVQ. 1567-1570
- Takuya Koizumi, Shuji Taniguchi, Ken-ichi Hattori, Mikio Mori: Simplified sub-neural-networks for accurate phoneme recognition. 1571-1574
- Victoria Rodellar, Victor Nieto Lluis, Pedro Gómez, Daniel Martinez, Mercedes Pérez: A neural network for phonetically decoding the speech trace. 1575-1578
- Kiyoaki Aikawa, Tsuyoshi Saito: Noise robust speech recognition using a dynamic-cepstrum. 1579-1582
Speech Analysis and Enhancement
- Toshiyuki Aritsuka, Yoshito Nejime:

Telephone-band speech enhancement based on the fundamental frequency component compensation. 1583-1586 - Nobuyuki Kunieda, Tetsuya Shimamura, Jouji Suzuki, Hiroyuki Yashima:

Reduction of noise level by SPAD (speech processing system by use of auto-difference function). 1587-1590 - Yuki Yoshida, Masanobu Abe:

An algorithm to reconstruct wideband speech from narrowband speech based on codebook mapping. 1591-1594 - Carl W. Seymour, M. Niranjan:

An hmm-based cepstral-domain speech enhancement system. 1595-1598 - Naoto Iwahashi, Yoshinori Sagisaka:

Voice adaptation using multi-functional transformation with weighting by radial basis function networks. 1599-1602 - Hong Tang, Xiaoyuan Zhu, Iain MacLeod, J. Bruce Millar, Michael Wagner:

A dynamic-window weighted-RMS averaging filter applied to speaker identification. 1603-1606 - Hiroshi Yasukawa:

Quality enhancement of band limited speech by filtering and multirate techniques. 1607-1610 - Thanh Tung Le, John S. Mason, Tadashi Kitamura:

Characteristics of multi-layer perceptron models in enhancing degraded speech. 1611-1614 - Adam B. Fineberg, Kevin C. Yu:

A time-frequency analysis technique for speech recognition signal processing. 1615-1618 - Paavo Alku, Erkki Vilkman:

Estimation of the glottal pulseform based on discrete all-pole modeling. 1619-1622 - H. Nishi, M. Kitai:

Analysis and detection of double talk in telephone dialogs. 1623-1626 - Ove Andersen, Paul Dalsgaard:

A self-learning approach to transcription of danish proper names. 1627-1630 - Eisuke Horita, Yoshikazu Miyanaga, Koji Tochinai:

A time-varying analysis based on analytic speech signals. 1631-1634 - Takashi Endo, Shun'ichi Yajima:

New spectrum interpolation method for improving quality of synthesized speech. 1635-1638 - Mark Johnson:

Automatic context-sensitive measurement of the acoustic correlates of distinctive features at landmarks. 1639-1642 - Alain Soquet, Marco Saerens:

A comparison of different acoustic and articulatory representations for the determination of place of articulation of plosives. 1643-1646 - Naotoshi Osaka:

An analysis of voice quality using sinusoidal model. 1647-1650 - Alan Wrench, M. M. Watson, David S. Soutar, A. Gerry Robertson, John Laver:

Fast formant estimation of children's speech. 1651-1654 - Josep M. Salavedra, Enrique Masgrau, Asunción Moreno, Joan Estarellas, Javier Hernando:

Some fast higher order AR estimation techniques applied to parametric Wiener filtering. 1655-1658 - Mikio Yamaguchi, Shigeharu Toyoda, Katsuhiro Yada:

Applications of a rule-based speech synthesizer module. 1659-1662 - Jon P. Iles, William H. Edmondson:

Quasi-articulatory formant synthesis. 1663-1666 - Knut Kvale:

On the connection between manual segmentation conventions and "errors" made by automatic segmentation. 1667-1670 - Mutsuko Tomokiyo:

Natural utterance segmentation and discourse label assignment. 1671-1674 - Satoshi Yumoto, Jouji Suzuki, Tetsuya Shimamura:

Possibility of speech synthesis by common voice source. 1675-1678 - Changfu Wang, Wenshen Yue, Keikichi Hirose, Hiroya Fujisaki:

A scheme for Chinese speech synthesis by rule based on pitch-synchronous multi-pulse excitation LP method. 1679-1682 - Anders Lindström, Mats Ljungqvist:

Text processing within a speech synthesis system. 1683-1686 - Pedro M. Carvalho, P. Lopes, Isabel Trancoso, Luís C. Oliveira:

E-mail to voice-mail conversion using a Portuguese text-to-speech system. 1687-1690 - Shigeyoshi Kitazawa, Satoshi Kobayashi, Takao Matsunaga, Hideya Ichikawa:

Tempo estimation by wave envelope for recognition of paralinguistic features in spontaneous speech. 1691-1694
Acquisition of Spoken Language
- Teruaki Tsushima, Osamu Takizawa, Midori Sasaki, Satoshi Shiraki, Kanae Nishi, Morio Kohno, Paula Menyuk, Catherine T. Best:

Discrimination of English /r-l/ and /w-y/ by Japanese infants at 6-12 months: language-specific developmental changes in speech perception abilities. 1695-1698 - Hiroaki Kojima, Kazuyo Tanaka, Satoru Hayamizu:

Generating phoneme models for forming phonological concepts. 1699-1702 - Yoko Shimura, Satoshi Imaizumi:

Infant's expression and perception of emotion through vocalizations. 1703-1706 - Tomohiko Ito:

Transition from two-word to multiple-word stage in the course of language acquisition. 1707-1710 - P. V. S. Rao, Nandini Bondale:

BSLP based language grammars for child speech. 1711-1714 - John Nienart, J. Devin McAuley:

Using prediction to learn pre-linguistic speech characteristics: a connectionist model. 1715-1718
Education of Spoken Language
- Michiko Mochizuki-Sudo, Shigeru Kiritani:

Naturalness judgments for stressed vowel duration in second language acquisition. 1719-1722 - Margaret Maeda:

Pre-nuclear intonation in questions of Japanese students in English. 1723-1726 - Junko Tsumaki:

Intonational properties of adverbs in Tokyo Japanese. 1727-1730 - Ichiro Miura:

Production and perception of English sentences spoken by Japanese university students. 1731-1734 - Atsuko Kikuchi, Wayne Lawrence:

Using morphological analysis to improve Japanese pronunciation. 1735-1738 - Yukihiro Nishinuma:

How do the French perceive tonal accent in Japanese? Experimental evidence. 1739-1742
Speech/Language Database
- Tsuyoshi Morimoto, Noriyoshi Uratani, Toshiyuki Takezawa, Osamu Furuse, Yasuhiro Sobashima, Hitoshi Iida, Atsushi Nakamura, Yoshinori Sagisaka, Norio Higuchi, Yasuhiro Yamazaki:

A speech and language database for speech translation research. 1791-1794 - Lori Lamel, Florian Schiel, Adrian Fourcin, Joseph Mariani, Hans G. Tillmann:

The translanguage English database (TED). 1795-1798 - Ikuo Kudo, Takao Nakama, Nozomi Arai, Nahoko Fujimura:

The data collection of Voice Across Japan (VAJ) project. 1799-1802 - M. Damhuis, T. I. Boogaart, C. in't Veld, M. Versteijlen, W. Schelvis, L. Bos, Lou Boves:

Creation and analysis of the Dutch Polyphone corpus. 1803-1806 - Per Rosenbeck, Bo Baungaard, Claus Jacobsen, Dan-Joe Barry:

The design and efficient recording of a 3000 speaker Scandinavian telephone speech database: rafael.0. 1807-1810 - Daniel Tapias, Alejandro Acero, J. Esteve, Juan Carlos Torrecilla:

The VESTEL telephone speech database. 1811-1814 - Ronald A. Cole, Mark A. Fanty, Mike Noel, Terri Lander:

Telephone speech corpus development at CSLU. 1815-1818 - P. E. Kenne, Hamish G. Pearcy, Mary O'Kane:

Derivation of a large speech and natural language database through alignment of court recordings and their transcripts. 1819-1822 - Qiguang Lin, ChiWei Che, Joe French:

Description of the CAIP speech corpus. 1823-1826 - Rob Kassel:

Automating the design of compact linguistic corpora. 1827-1830 - Kazuyo Tanaka, Kanae Kinebuchi, Naoko Houra, Kazuyuki Takagi, Shuichi Itahashi, Katsunobu Itou, Satoru Hayamizu:

Annotating illocutionary force types and phonological features into a spontaneous dialogue corpus: an experimental study. 1831-1834
Speaker, Language and Phoneme Recognition
- Aaron E. Rosenberg, Chin-Hui Lee, Frank K. Soong:

Cepstral channel normalization techniques for HMM-based speaker verification. 1835-1838 - Vijay Raman, Jayant M. Naik:

Noise reduction for speech recognition and speaker verification in mobile telephony. 1839-1842 - Eluned S. Parris, Michael J. Carey:

Discriminative phonemes for speaker identification. 1843-1846 - Javier Hernando, Climent Nadeu, Carlos Villagrasa, Enric Monte:

Speaker identification in noisy conditions using linear prediction of the one-sided autocorrelation sequence. 1847-1850 - Jialong He, Li Liu, Günther Palm:

A text-independent speaker identification system based on neural networks. 1851-1854 - Fangxin Chen, J. Bruce Millar, Michael Wagner:

Hybrid threshold approach in text-independent speaker verification. 1855-1858 - Yasuo Ariki, Keisuke Doi:

Speaker recognition based on subspace methods. 1859-1862 - Seong-Jin Yun, Yung-Hwan Oh:

Performance improvement of speaker recognition system for small training data. 1863-1866 - B. Yegnanarayana, S. P. Wagh, S. Rajendran:

A speaker verification system using prosodic features. 1867-1870 - William Goldenthal, James R. Glass:

Statistical trajectory models for phonetic recognition. 1871-1874 - Mats Blomberg:

A common phone model representation for speech recognition and synthesis. 1875-1878 - Shubha Kadambe, James Hieronymus:

Spontaneous speech language identification with a knowledge of linguistics. 1879-1882 - Timothy J. Hazen, Victor W. Zue:

Recent improvements in an approach to segment-based automatic language identification. 1883-1886 - Padma Ramesh, David B. Roe:

Language identification with embedded word models. 1887-1890 - Kay M. Berkling, Etienne Barnard:

Language identification of six languages based on a common set of broad phonemes. 1891-1894 - Allan A. Reyes, Takashi Seino, Seiichi Nakagawa:

Three language identification methods based on HMMs. 1895-1898 - Shuichi Itahashi, Jian Xiong Zhou, Kimihito Tanaka:

Spoken language discrimination using speech fundamental frequency. 1899-1902 - Paul Dalsgaard, Ove Andersen:

Application of inter-language phoneme similarities for language identification. 1903-1906 - Hugo Van hamme, Guido Gallopyn, Ludwig Weynants, Bart D'hoore, Hervé Bourlard:

Comparison of acoustic features and robustness tests of a real-time recogniser using a hardware telephone line simulator. 1907-1910 - Shigeki Okawa, Tetsunori Kobayashi, Katsuhiko Shirai:

Phoneme recognition in various styles of utterance based on mutual information criterion. 1911-1914 - Masakatsu Hoshimi, Maki Yamada, Katsuyuki Niyada:

Speaker independent speech recognition method using phoneme similarity vector. 1915-1918 - Kai Hübener, Julie Carson-Berndsen:

Phoneme recognition using acoustic events. 1919-1922 - Parham Mokhtari, Frantz Clermont:

Contributions of selected spectral regions to vowel classification accuracy. 1923-1926 - Climent Nadeu, Biing-Hwang Juang:

Filtering of spectral parameters for speech recognition. 1927-1930 - Barry Arons:

Pitch-based emphasis detection for segmenting speech recordings. 1931-1934 - Zhishun Li, Patrick Kenny:

Overlapping phone segments. 1935-1938 - Maurice K. Wong:

Clustering triphones by phonological mapping. 1939-1942
Speech Perception and Speech Related Disorders
- Nelson Morgan, Hervé Bourlard, Steven Greenberg, Hynek Hermansky:

Stochastic perceptual auditory-event-based models for speech recognition. 1943-1946 - Itaru F. Tatsumi, Hiroya Fujisaki:

Auditory perception of filled and empty time intervals, and mechanism of time discrimination. 1947-1950 - Margaret F. Cheesman, Jennifer C. Armitage, Kimberley Marshall:

Speech perception and growth of masking in younger and older adults. 1951-1954 - Toshio Irino, Roy D. Patterson:

A theory of asymmetric intensity enhancement around acoustic transients. 1955-1958 - Hector R. Javkin, Elizabeth Keate, Norma Antonanzas-Barroso, Ranjun Zou, Karen Youdelman:

Text-to-speech in the speech training of the deaf: adapting models to individual speakers. 1959-1962 - Thomas Holton:

Robust pitch and voicing detection using a model of auditory signal processing. 1963-1966 - Satoshi Imaizumi, Akiko Hayashi, Toshisada Deguchi:

Listener adaptive characteristics in dialogue speech: effects of temporal adjustment on emotional aspects of speech. 1967-1970 - Minoru Tsuzaki, Hiroaki Kato, Masako Tanaka:

Effects of acoustic discontinuity and phonemic deviation on the apparent duration of speech segments. 1971-1974 - Chie H. Craig, Richard M. Warren, Tricia B. K. Chirillo:

The influence of context on spoken language perception and processing among elderly and hearing impaired listeners. 1975-1978 - Hiroaki Kato, Minoru Tsuzaki, Yoshinori Sagisaka:

Acceptability of temporal modification in consonant and vowel onsets. 1979-1982 - Weizhong Zhu, Yoshinobu Kikuchi, Yasuo Endo, Hideki Kasuya, Minoru Hirano, Masanao Ohashi:

An integrated acoustic evaluation system of pathologic voice. 1983-1986 - Yumiko Fukuda, Wako Ikehara, Emiko Kamikubo, Shizuo Hiki:

An electronic dictionary of Japanese sign language: design of system and organization of database. 1987-1990 - Yasuo Endo, Hideki Kasuya:

Synthesis of pathological voice based on a stochastic voice source model. 1991-1994 - Hiroshi Hosoi, Yoshiaki Tsuta, Takashi Nishida, Kiyotaka Murata, Fumihiko Ohta, Tsuyoshi Mekata, Yumiko Kato:

Hearing aid evaluation using variable-speech-rate audiometry. 1995-1998 - Fred D. Minifie, Daniel Z. Huang, Jordan R. Green:

Relationship between acoustic measures of vocal perturbation and perceptual judgments of breathiness, harshness, and hoarseness. 1999-2002 - Takashi Ikeda, Kouji Tasaki, Akira Watanabe:

A hearing aid by single resonant analysis for telephonic speech. 2003-2006 - Tsuneo Yamada, Reiko Akahane-Yamada, Winifred Strange:

Perceptual learning of Japanese mora syllables by native speakers of American English: an analysis of acquisition processes of speech perception in second language learning. 2007-2010 - Yuichi Ueda, Takayuki Agawa, Akira Watanabe:

A DSP-based amplitude compressor for digital hearing aids. 2011-2014 - Amalia Sarabasa:

Perception and production saturation of spoken English as a first phase in reducing a foreign accent. 2015-2018 - Edmund Rooney, Fabrizio Carraro, Will Dempsey, Katie Robertson, Rebecca Vaughan, Mervyn A. Jack, Jonathan Murray:

Harp: an autonomous speech rehabilitation system for hearing-impaired people. 2019-2022 - Reiko Akahane-Yamada, Winifred Strange, James S. Magnuson, John S. Pruitt, William D. Clarke:

The intelligibility of Japanese speakers' production of American English /r/, /l/, and /w/, as evaluated by native speakers of American English. 2023-2026 - Itaru Nagayama, Norio Akamatsu, Toshiki Yoshino:

Phonetic visualization for speech training system by using neural network. 2027-2030 - Elzbieta B. Slawinski:

Perceptual and productive distinction between the English [r] and [l] in prevocalic position by English and Japanese speakers. 2031-2034 - Yasushi Naito, Hidehiko Okazawa, Iwao Honjo, Yosaku Shiomi, Haruo Takahashi, Waka Hoji, Michio Kawano, Hiroshi Ishizu, Sadahiko Nishizawa, Yoshiharu Yonekura, Junji Konishi:

Cortical activation with speech in cochlear implant users: a study with positron emission tomography. 2035-2038 - Kiyoaki Aikawa, Reiko Akahane-Yamada:

Comparative study of spectral representations in measuring the English /r/-/l/ acoustic-perceptual dissimilarity. 2039-2042 - Shigeyoshi Kitazawa, Kazuyuki Muramoto, Juichi Ito:

Acoustic simulation of auditory model based speech processor for cochlear implant system. 2043-2046 - Makio Kashino, Chie H. Craig:

The influence of knowledge and experience during the processing of spoken words: non-native listeners. 2047-2050 - David House:

Perception and production of mood in speech by cochlear implant users. 2051-2054 - Yoshito Nejime, Toshiyuki Aritsuka, Toshiki Imamura, Tohru Ifukube, Jun'ichi Matsushima:

A portable digital speech rate converter and its evaluation by hearing-impaired listeners. 2055-2058
Speech Coding
- Keiichi Funaki, Kazunaga Yoshida, Kazunori Ozawa:

4kb/s speech coding with small computational amount and memory requirement: ULCELP. 2059-2062 - Miguel Angel Ferrer-Ballester, Aníbal R. Figueiras-Vidal:

Improving CELP voice quality by projection similarity measure. 2063-2066 - Hitoshi Ohmuro, Kazunori Mano, Takehiro Moriya:

Variable bit-rate speech coding based on PSI-CELP. 2067-2070 - Sung-Joo Kim, Seung-Jong Park, Yung-Hwan Oh:

Complexity reduction methods for vector sum excited linear prediction coding. 2071-2074 - Preeti Rao, Yoshiaki Asakawa, Hidetoshi Sekine:

8 kb/s low-delay speech coding with 4 ms frame size. 2075-2078 - Jey-Hsin Yao, Yoshinori Tanaka:

Low-bit-rate speech coding with mixed-excitation and interpolated LPC coefficients. 2079-2082 - Cheung-Fat Chan:

Multi-band excitation coding of speech at 960 bps using split residual VQ and V/UV decision regeneration. 2083-2086 - Kazuhito Koishida, Keiichi Tokuda, Takao Kobayashi, Satoshi Imai:

Speech coding based on adaptive mel-cepstral analysis for noisy channels. 2087-2090 - Fu-Rong Jean, Hsiao-Chuan Wang:

A two-stage coding of speech LSP parameters based on KLT transform and 2d-prediction. 2091-2094
The Impact of Signal Processing Technologies on Communication Disabilities
- Harry Levitt:

Technologies for signal processing hearing aids. 2095-2098 - Futoshi Asano, Yôiti Suzuki, Toshio Sone:

Signal processing techniques applicable to hearing aids. 2099-2102 - Peter Blamey, Gary Dooley, Elvira Parisi:

Combination and comparison of electric stimulation and residual hearing. 2103-2106 - Sotaro Funasaka, Masae Shiroma, Kumiko Yukawa:

Analysis of consonants perception of Japanese 22-channel cochlear implant patients. 2107-2110 - Volker Hohmann, Birger Kollmeier:

Digital hearing aid techniques employing a loudness model for recruitment compensation. 2111-2114 - Akira Nakamura, Nobumasa Seiyama, Atsushi Imai, Tohru Takagi, Eiichi Miyasaka:

A new approach to compensate degeneration of speech intelligibility for elderly listeners. 2115-2118 - Tsuyoshi Mekata, Yoshiyuki Yoshizumi, Yumiko Koto, Eiji Noguchi, Yoshinori Yamada:

Development of a portable multi-function digital hearing aid. 2119-2122 - Donald G. Jamieson:

The use of spoken language in the evaluation of assistive listening devices. 2123-2126
Continuous Speech Recognition
- Jean-Luc Gauvain, Lori Lamel, Gilles Adda, Martine Adda-Decker:

Continuous speech dictation in French. 2127-2130 - Ronald A. Cole, Beatrice T. Oshika, Mike Noel, Terri Lander, Mark A. Fanty:

Labeler agreement in phonetic labeling of continuous speech. 2131-2134 - Biing-Hwang Juang, Jay G. Wilpon:

Recent technology developments in connected digit speech recognition. 2135-2138 - Daniel Jurafsky, Chuck Wooters, Gary N. Tajchman, Jonathan Segal, Andreas Stolcke, Eric Fosler, Nelson Morgan:

The Berkeley Restaurant Project. 2139-2142 - Volker Steinbiss, Bach-Hiep Tran, Hermann Ney:

Improvements in beam search. 2143-2146 - Kevin Johnson, Roberto Garigliano, Russell James Collingham:

Data-based control of the search space generated by multiple knowledge bases for speech recognition. 2147-2150 - Atsuhiko Kai, Seiichi Nakagawa:

Evaluation of unknown word processing in a spoken word recognition system. 2151-2154 - Tetsuo Araki, Satoru Ikehara, Hideto Yokokawa:

Using accent information to correctly select Japanese phrases made of strings of syllables. 2155-2158 - Sheryl R. Young:

Estimating recognition confidence: methods for conjoining acoustics, semantics, pragmatics and discourse. 2159-2162 - John W. McDonough, Herbert Gish:

Issues in topic identification on the switchboard corpus. 2163-2166 - Li Deng, Hossein Sameti:

Automatic speech recognition using dynamically defined speech units. 2167-2170 - M. Jones, Philip C. Woodland:

Modelling syllable characteristics to improve a large vocabulary continuous speech recogniser. 2171-2174 - Natividad Prieto, Emilio Sanchis, Luis Palmero:

Continuous speech understanding based on automatic learning of acoustic and semantic models. 2175-2178 - Kazuhiro Kondo, Yu-Hung Kao, Barbara Wheatley:

On inter-phrase context dependencies in continuously read Japanese speech. 2179-2182 - Gernot A. Fink, Franz Kummert, Gerhard Sagerer:

A close high-level interaction scheme for recognition and interpretation of speech. 2183-2186 - Sylvie Coste-Marquis:

Interaction between most reliable acoustic cues and lexical analysis. 2187-2190 - Yasuo Ariki, T. Kawamura:

Simultaneous spotting of phonemes and words in continuous speech. 2191-2194 - Man-Hung Siu, Herbert Gish, Jan Robin Rohlicek:

Predicting word spotting performance. 2195-2198 - Sumio Ohno, Hiroya Fujisaki, Keikichi Hirose:

A method for word spotting in continuous speech using both segmental and contextual likelihood scores. 2199-2202 - Renato de Mori, Diego Giuliani, Roberto Gretter:

Phone-based prefiltering for continuous speech recognition. 2203-2206 - Harald Singer, Jun-ichi Takami:

Speech recognition without grammar or vocabulary constraints. 2207-2210 - Javier Macías Guarasa, Manuel A. Leandro, Xavier Menéndez-Pidal, José Colás, Ascensión Gallardo-Antolín, José Manuel Pardo, Santiago Aguilera:

Comparison of three approaches to phonetic string generation for large vocabulary speech recognition. 2211-2214 - Pietro Laface, Lorenzo Fissore, Franco Ravera:

Automatic generation of words toward flexible vocabulary isolated word recognition. 2215-2218 - H. C. Choi, Robin W. King:

Fast speaker adaptation through spectral transformation for continuous speech recognition. 2219-2222 - Sekharjit Datta:

Dynamic machine adaptation in a multi-speaker isolated word recognition system. 2223-2226 - Sheryl R. Young:

Discourse structure for spontaneous spoken interactions: multi-speaker vs. human-computer dialogs. 2227-2230 - Hansjörg Mixdorff, Hiroya Fujisaki:

Analysis of voice fundamental frequency contours of German utterances using a quantitative model. 2231-2234
