


Bayya Yegnanarayana
B. Yegnanarayana 0001
Person information
- affiliation: International Institute of Information Technology, Hyderabad, India
2020 – today
- 2023
- [j81] Sudarsana Reddy Kadiri, Paavo Alku, B. Yegnanarayana: Analysis of Instantaneous Frequency Components of Speech Signals for Epoch Extraction. Comput. Speech Lang. 78: 101443 (2023)
- 2022
- [j80] B. H. V. S. Narayanamurthy, J. V. Satyanarayana, B. Yegnanarayana: On Improving the Accuracy and Robustness of Time Delay Estimation of Broadband Signals. Circuits Syst. Signal Process. 41(1): 514-531 (2022)
- 2021
- [j79] Vishala Pannala, B. Yegnanarayana: A neural network approach for speech activity detection for Apollo corpus. Comput. Speech Lang. 65: 101137 (2021)
- [j78] RaviShankar Prasad, B. Yegnanarayana: A study of vowel nasalization using instantaneous spectra. Comput. Speech Lang. 69: 101214 (2021)
- [j77] Sudarsana Reddy Kadiri, Paavo Alku, Bayya Yegnanarayana: Extraction and Utilization of Excitation Information of Speech: A Review. Proc. IEEE 109(12): 1920-1941 (2021)
- [c168] Preetam Prabhu Srikar Dammu, Srinivasa Rao Chalamala, Ajeet Kumar Singh, Bayya Yegnanarayana: Interpretable and Robust Face Verification. CIKM Workshops 2021
- 2020
- [j76] Sudarsana Reddy Kadiri, B. Yegnanarayana: Determination of glottal closure instants from clean and telephone quality speech signals using single frequency filtering. Comput. Speech Lang. 64: 101097 (2020)
- [j75] B. H. V. S. Narayana Murthy, B. Yegnanarayana, Sudarsana Reddy Kadiri: Time Delay Estimation from Mixed Multispeaker Speech Signals Using Single Frequency Filtering. Circuits Syst. Signal Process. 39(4): 1988-2005 (2020)
- [j74] Sudarsana Reddy Kadiri, P. Gangamohan, Suryakanth V. Gangashetty, Paavo Alku, B. Yegnanarayana: Excitation Features of Speech for Emotion Recognition Using Neutral Speech as Reference. Circuits Syst. Signal Process. 39(9): 4459-4481 (2020)
- [j73] Sudarsana Reddy Kadiri, RaviShankar Prasad, B. Yegnanarayana: Detection of glottal closure instant and glottal open region from speech signals using spectral flatness measure. Speech Commun. 116: 30-43 (2020)
- [j72] Sudarsana Reddy Kadiri, Paavo Alku, B. Yegnanarayana: Analysis and classification of phonation types in speech and singing voice. Speech Commun. 118: 33-47 (2020)
- [c167] Sudarsana Reddy Kadiri, Paavo Alku, B. Yegnanarayana: Comparison of Glottal Closure Instants Detection Algorithms for Emotional Speech. ICASSP 2020: 7379-7383
- [c166] B. Yegnanarayana, Joseph M. Anand, Vishala Pannala: Enhancing Formant Information in Spectrographic Display of Speech. INTERSPEECH 2020: 165-169
- [c165] B. H. V. S. Narayana Murthy, J. V. Satyanarayana, Nivedita Chennupati, B. Yegnanarayana: Instantaneous Time Delay Estimation of Broadband Signals. INTERSPEECH 2020: 5081-5085
- [i2] RaviShankar Prasad, B. Yegnanarayana: A study of vowel nasalization using instantaneous spectra. CoRR abs/2009.06416 (2020)
2010 – 2019
- 2019
- [j71] Nivedita Chennupati, Sudarsana Reddy Kadiri, B. Yegnanarayana: Spectral and temporal manipulations of SFF envelopes for enhancement of speech intelligibility in noise. Comput. Speech Lang. 54: 86-105 (2019)
- [i1] Thomas Drugman, Paavo Alku, Abeer Alwan, Bayya Yegnanarayana: Glottal Source Processing: from Analysis to Applications. CoRR abs/1912.12604 (2019)
- 2018
- [j70] Nivedita Chennupati, Sudarsana Reddy Kadiri, Bayya Yegnanarayana: Significance of phase in single frequency filtering outputs of speech signals. Speech Commun. 97: 66-72 (2018)
- [c164] RaviShankar Prasad, Sudarsana Reddy Kadiri, Suryakanth V. Gangashetty, Bayya Yegnanarayana: Discriminating Nasals and Approximants in English Language Using Zero Time Windowing. INTERSPEECH 2018: 177-181
- [c163] RaviShankar Prasad, Bayya Yegnanarayana: Identification and Classification of Fricatives in Speech Using Zero Time Windowing Method. INTERSPEECH 2018: 187-191
- [c162] Sudarsana Reddy Kadiri, Bayya Yegnanarayana: Breathy to Tense Voice Discrimination using Zero-Time Windowing Cepstral Coefficients (ZTWCCs). INTERSPEECH 2018: 232-236
- [c161] Sudarsana Reddy Kadiri, Bayya Yegnanarayana: Analysis and Detection of Phonation Modes in Singing Voice using Excitation Source Features and Single Frequency Filtering Cepstral Coefficients (SFFCC). INTERSPEECH 2018: 441-445
- [c160] Gunnam Aneeja, Sudarsana Reddy Kadiri, Bayya Yegnanarayana: Detection of Glottal Closure Instants in Degraded Speech Using Single Frequency Filtering Analysis. INTERSPEECH 2018: 2300-2304
- [c159] Sudarsana Reddy Kadiri, Bayya Yegnanarayana: Estimation of Fundamental Frequency from Singing Voice Using Harmonics of Impulse-like Excitation Source. INTERSPEECH 2018: 2319-2323
- [c158] B. H. V. S. Narayanamurthy, J. V. Satyanarayana, Bayya Yegnanarayana: Determining Speaker Location from Speech in a Practical Environment. INTERSPEECH 2018: 2386-2387
- [c157] P. Gangamohan, Suryakanth V. Gangashetty, B. Yegnanarayana: Time-frequency spectral error for analysis of high arousal speech. SMM 2018
- [e1] B. Yegnanarayana: 19th Annual Conference of the International Speech Communication Association, Interspeech 2018, Hyderabad, India, September 2-6, 2018. ISCA 2018 [contents]
- 2017
- [j69] Sudarsana Reddy Kadiri, B. Yegnanarayana: Epoch extraction from emotional speech using single frequency filtering approach. Speech Commun. 86: 52-63 (2017)
- [j68] G. Aneeja, B. Yegnanarayana: Extraction of Fundamental Frequency From Degraded Speech Using Temporal Envelopes at High SNR Frequencies. IEEE ACM Trans. Audio Speech Lang. Process. 25(4): 829-838 (2017)
- [c156] Sudarsana Reddy Kadiri, B. Yegnanarayana: Speech polarity detection using strength of impulse-like excitation extracted from speech epochs. ICASSP 2017: 5610-5614
- [c155] Nivedita Chennupati, B. H. V. S. Narayana Murthy, B. Yegnanarayana: A Signal Processing Approach for Speaker Separation Using SFF Analysis. INTERSPEECH 2017: 2034-2035
- [c154] P. Gangamohan, B. Yegnanarayana: A Robust and Alternative Approach to Zero Frequency Filtering Method for Epoch Extraction. INTERSPEECH 2017: 2297-2300
- [c153] Bhanu Teja Nellore, RaviShankar Prasad, Sudarsana Reddy Kadiri, Suryakanth V. Gangashetty, B. Yegnanarayana: Locating Burst Onsets Using SFF Envelope and Phase Information. INTERSPEECH 2017: 3023-3027
- 2016
- [c152] Sri Harsha Dumpala, Bhanu Teja Nellore, Raghu Ram Nevali, Suryakanth V. Gangashetty, B. Yegnanarayana: Robust Vowel Landmark Detection Using Epoch-Based Features. INTERSPEECH 2016: 160-164
- [c151] Sri Harsha Dumpala, P. Gangamohan, Suryakanth V. Gangashetty, B. Yegnanarayana: Use of Vowels in Discriminating Speech-Laugh from Laughter and Neutral Speech. INTERSPEECH 2016: 1437-1441
- [c150] Vishala Pannala, G. Aneeja, Sudarsana Reddy Kadiri, B. Yegnanarayana: Robust Estimation of Fundamental Frequency Using Single Frequency Filtering Approach. INTERSPEECH 2016: 2155-2159
- [c149] Vinay Kumar Mittal, B. Yegnanarayana: A sparse representation of the excitation source characteristics of nonnormal speech sounds. ISCSLP 2016: 1-5
- [p1] P. Gangamohan, Sudarsana Reddy Kadiri, B. Yegnanarayana: Analysis of Emotional Speech - A Review. Toward Robotic Socially Believable Behaving Systems (I) 2016: 205-238
- 2015
- [j67] Vinay Kumar Mittal, Bayya Yegnanarayana: Analysis of production characteristics of laughter. Comput. Speech Lang. 30(1): 99-115 (2015)
- [j66] G. Aneeja, B. Yegnanarayana: Single Frequency Filtering Approach for Discriminating Speech and Nonspeech. IEEE ACM Trans. Audio Speech Lang. Process. 23(4): 705-717 (2015)
- [c148] Sudarsana Reddy Kadiri, B. Yegnanarayana: Analysis of singing voice for epoch extraction using Zero Frequency Filtering method. ICASSP 2015: 4260-4264
- [c147] Srinivasa Rao Chalamala, Balakrishna Gudla, B. Yegnanarayana, K. Anitha Sheela: Improved lip contour extraction for visual speech recognition. ICCE 2015: 459-462
- [c146] Srinivasa Rao Chalamala, Santosh Kumar Jami, B. Yegnanarayana: Enhanced face recognition using Cross Local Radon Binary Patterns. ICCE 2015: 481-484
- [c145] Sudarsana Reddy Kadiri, P. Gangamohan, Suryakanth V. Gangashetty, Bayya Yegnanarayana: Analysis of excitation source features of speech for emotion recognition. INTERSPEECH 2015: 1324-1328
- [c144] Sri Harsha Dumpala, Bhanu Teja Nellore, Raghu Ram Nevali, Suryakanth V. Gangashetty, Bayya Yegnanarayana: Robust features for sonorant segmentation in continuous speech. INTERSPEECH 2015: 1987-1991
- [c143] RaviShankar Prasad, Bayya Yegnanarayana: Robust pitch estimation in noisy speech using ZTW and group delay function. INTERSPEECH 2015: 3289-3292
- [c142] Abhijeet Saxena, B. Yegnanarayana: Distinctive feature based representation of speech for query-by-example spoken term detection. INTERSPEECH 2015: 3680-3684
- 2014
- [j65] Thomas Drugman, Paavo Alku, Abeer Alwan, Bayya Yegnanarayana: Glottal source processing: From analysis to applications. Comput. Speech Lang. 28(5): 1117-1138 (2014)
- [j64] Anand Joseph Xavier Medabalimi, Guruprasad Seshadri, Bayya Yegnanarayana: Extraction of formant bandwidths using properties of group delay functions. Speech Commun. 63: 70-83 (2014)
- [c141] Sri Harsha Dumpala, Karthik Venkat Sridaran, Suryakanth V. Gangashetty, B. Yegnanarayana: Analysis of laughter and speech-laugh signals using excitation source information. ICASSP 2014: 975-979
- [c140] Basil George, B. Yegnanarayana: Unsupervised query-by-example spoken term detection using segment-based Bag of Acoustic Words. ICASSP 2014: 7133-7137
- [c139] Sudarsana Reddy Kadiri, P. Gangamohan, Vinay Kumar Mittal, B. Yegnanarayana: Naturalistic Audio-Visual Emotion Database. ICON 2014: 206-213
- [c138] Sudarsana Reddy Kadiri, P. Gangamohan, B. Yegnanarayana: Discriminating Neutral and Emotional Speech using Neural Networks. ICON 2014: 214-221
- [c137] Vinay Kumar Mittal, Bayya Yegnanarayana: An Automatic Shout Detection System Using Speech Production Features. MA3HMI@INTERSPEECH 2014: 88-98
- [c136] Vinay Kumar Mittal, B. Yegnanarayana: Significance of aperiodicity in the pitch perception of expressive voices. INTERSPEECH 2014: 504-508
- [c135] P. Gangamohan, Sudarsana Reddy Kadiri, Suryakanth V. Gangashetty, B. Yegnanarayana: Excitation source features for discrimination of anger and happy emotions. INTERSPEECH 2014: 1253-1257
- [c134] Basil George, Abhijeet Saxena, Gautam Varma Mantena, Kishore Prahallad, B. Yegnanarayana: Unsupervised query-by-example spoken term detection using bag of acoustic words and non-segmental dynamic time warping. INTERSPEECH 2014: 1742-1746
- [c133] Vinay Kumar Mittal, B. Yegnanarayana: Study of changes in glottal vibration characteristics during laughter. INTERSPEECH 2014: 1777-1781
- [c132] G. Aneeja, B. Yegnanarayana: Speech detection in transient noises. INTERSPEECH 2014: 2356-2360
- 2013
- [j63] Bayya Yegnanarayana, Dhananjaya N. Gowda: Spectro-temporal analysis of speech signals using zero-time windowing and group delay function. Speech Commun. 55(6): 782-795 (2013)
- [c131] Vinay Kumar Mittal, B. Yegnanarayana: Production features for detection of shouted speech. CCNC 2013: 106-111
- [c130] Apoorv Reddy Arrabothu, Nivedita Chennupati, B. Yegnanarayana: Syllable nuclei detection using perceptually significant features. INTERSPEECH 2013: 963-967
- [c129] P. Gangamohan, Sudarsana Reddy Kadiri, B. Yegnanarayana: Analysis of emotional speech at subsegmental level. INTERSPEECH 2013: 1916-1920
- [c128] RaviShankar Prasad, B. Yegnanarayana: Acoustic segmentation of speech using zero time liftering (ZTL). INTERSPEECH 2013: 2292-2296
- 2012
- [j62] Hussien Seid Worku, B. Yegnanarayana, S. Rajendran: Spotting glottal stop in Amharic in continuous speech. Comput. Speech Lang. 26(4): 293-305 (2012)
- [j61] Anil Kumar Sao, B. Yegnanarayana: Edge extraction using zero-frequency resonator. Signal Image Video Process. 6(2): 287-300 (2012)
- [c127] P. Gangamohan, Vinay Kumar Mittal, Bayya Yegnanarayana: A Flexible Analysis Synthesis Tool (FAST) for studying the characteristic features of emotion in speech. CCNC 2012: 250-254
- [c126] D. Gomathi, Sathya Adithya Thati, Karthik Venkat Sridaran, Bayya Yegnanarayana: Analysis of Mimicry Speech. INTERSPEECH 2012: 695-698
- [c125] Vinay Kumar Mittal, N. Dhananjaya, Bayya Yegnanarayana: Effect of Tongue Tip Trilling on the Glottal Excitation Source. INTERSPEECH 2012: 1596-1599
- 2011
- [j60] Guruprasad Seshadri, Bayya Yegnanarayana: Performance of an Event-Based Instantaneous Fundamental Frequency Estimator for Distant Speech Signals. IEEE Trans. Speech Audio Process. 19(7): 1853-1864 (2011)
- [c124] N. Dhananjaya, B. Yegnanarayana, Suryakanth V. Gangashetty: Acoustic-phonetic information from excitation source for refining manner hypotheses of a phone recognizer. ICASSP 2011: 5252-5255
- [c123] Bayya Yegnanarayana, S. R. Mahadeva Prasanna, Sunitha Guruprasad: Study of robustness of zero frequency resonator method for extraction of fundamental frequency. ICASSP 2011: 5392-5395
- [c122] Bayya Yegnanarayana, Anand Joseph Xavier Medabalimi, Suryakanth V. Gangashetty, N. Dhananjaya: Decomposition of speech signals for analysis of aperiodic components of excitation. ICASSP 2011: 5396-5399
- [c121] D. Govind, S. R. Mahadeva Prasanna, Bayya Yegnanarayana: Neutral to Target Emotion Conversion Using Source and Suprasegmental Information. INTERSPEECH 2011: 2969-2972
- [c120] Anil Kumar Sao, B. Yegnanarayana: Laplacian of smoothed image as representation for face recognition. WIFS 2011: 1-6
- 2010
- [j59] C. Krishna Mohan, B. Yegnanarayana: Classification of sport videos using edge-based features and autoassociative neural network models. Signal Image Video Process. 4(1): 61-73 (2010)
- [j58] Anil Kumar Sao, B. Yegnanarayana: On the use of phase of the Fourier transform for face recognition under variations in illumination. Signal Image Video Process. 4(3): 353-358 (2010)
- [j57] N. Dhananjaya, B. Yegnanarayana: Voiced/Nonvoiced Detection Based on Robustness of Voiced Epochs. IEEE Signal Process. Lett. 17(3): 273-276 (2010)
- [j56] Srinivas Desai, Alan W. Black, B. Yegnanarayana, Kishore Prahallad: Spectral Mapping Using Artificial Neural Networks for Voice Conversion. IEEE Trans. Speech Audio Process. 18(5): 954-964 (2010)
- [c119] B. Yegnanarayana, S. R. Mahadeva Prasanna: Analysis of instantaneous F0 contours from two speakers mixed signal using zero frequency filtering. ICASSP 2010: 5074-5077
- [c118] Sri Harish Reddy Mallidi, Kishore Prahallad, Suryakanth V. Gangashetty, B. Yegnanarayana: Significance of pitch synchronous analysis for speaker recognition using AANN models. INTERSPEECH 2010: 669-672
- [c117] Anand Joseph Xavier Medabalimi, Sri Harish Reddy Mallidi, B. Yegnanarayana: Speaker-dependent mapping of source and system features for enhancement of throat microphone speech. INTERSPEECH 2010: 985-988
- [c116] B. Avinash, Sunitha Guruprasad, B. Yegnanarayana: Exploring subsegmental and suprasegmental features for a text-dependent speaker verification in distant speech signals. INTERSPEECH 2010: 1073-1076
2000 – 2009
- 2009
- [j55] K. Sreenivasa Rao, B. Yegnanarayana: Intonation modeling for Indian languages. Comput. Speech Lang. 23(2): 240-256 (2009)
- [j54] K. Sreenivasa Rao, B. Yegnanarayana: Duration modification using glottal closure instants and vowel onset points. Speech Commun. 51(12): 1263-1269 (2009)
- [j53] K. Sri Rama Murty, Bayya Yegnanarayana, Joseph M. Anand: Characterization of Glottal Activity From Speech Signals. IEEE Signal Process. Lett. 16(6): 469-472 (2009)
- [j52] B. Yegnanarayana, K. Sri Rama Murty: Event-Based Instantaneous Fundamental Frequency Estimation From Speech Signals. IEEE Trans. Speech Audio Process. 17(4): 614-624 (2009)
- [j51] B. Yegnanarayana, R. Kumaraswamy, K. Sri Rama Murty: Determining Mixing Parameters From Multispeaker Data Using Speech-Specific Information. IEEE Trans. Speech Audio Process. 17(6): 1196-1207 (2009)
- [c115] Anil Kumar Sao, B. Yegnanarayana: Analytic Phase-based Representation for Face Recognition. ICAPR 2009: 453-456
- [c114] Srinivas Desai, E. Veera Raghavendra, B. Yegnanarayana, Alan W. Black, Kishore Prahallad: Voice conversion using Artificial Neural Networks. ICASSP 2009: 3893-3896
- [c113] Joseph M. Anand, B. Yegnanarayana, Sanjeev Gupta, M. R. Kesheorey: Speaker dependent mapping for low bit rate coding of throat microphone speech. INTERSPEECH 2009: 1087-1090
- [c112] G. Bapineedu, B. Avinash, Suryakanth V. Gangashetty, B. Yegnanarayana: Analysis of Lombard speech using excitation source information. INTERSPEECH 2009: 1091-1094
- [c111] K. Sudheer Kumar, Sri Harish Reddy Mallidi, K. Sri Rama Murty, B. Yegnanarayana: Analysis of laugh signals for detecting in continuous speech. INTERSPEECH 2009: 1591-1594
- [c110] Hussien Seid Worku, S. Rajendran, B. Yegnanarayana: Acoustic characteristics of ejectives in amharic. INTERSPEECH 2009: 2287-2290
- 2008
- [j50] S. Palanivel, B. Yegnanarayana: Multimodal person authentication using speech, face and visual speech. Comput. Vis. Image Underst. 109(1): 44-55 (2008)
- [j49] N. Dhananjaya, B. Yegnanarayana: Speaker change detection in casual conversations using excitation source features. Speech Commun. 50(2): 153-161 (2008)
- [j48] Leena Mary, B. Yegnanarayana: Extraction and representation of prosodic features for language and speaker recognition. Speech Commun. 50(10): 782-796 (2008)
- [j47] K. Sri Rama Murty, B. Yegnanarayana: Epoch Extraction From Speech Signals. IEEE Trans. Speech Audio Process. 16(8): 1602-1613 (2008)
- [j46] Naresh P. Cuntoor, B. Yegnanarayana, Rama Chellappa: Activity Modeling Using Event Probability Sequences. IEEE Trans. Image Process. 17(4): 594-607 (2008)
- [c109] E. Veera Raghavendra, Srinivas Desai, B. Yegnanarayana, Alan W. Black, Kishore Prahallad: Blizzard 2008: Experiments on Unit Size for Unit Selection Speech Synthesis. Blizzard Challenge 2008
- [c108] Joel Pinto, B. Yegnanarayana, Hynek Hermansky, Mathew Magimai-Doss: Exploiting contextual information for improved phoneme recognition. ICASSP 2008: 4449-4452
- [c107] C. Krishna Mohan, N. Dhananjaya, B. Yegnanarayana: Video Shot Segmentation Using Late Fusion Technique. ICMLA 2008: 267-270
- [c106] Sachin Joshi, Kishore Prahallad, B. Yegnanarayana: AANN-HMM models for speaker verification and speech recognition. IJCNN 2008: 2681-2688
- [c105] N. Dhananjaya, S. Rajendran, B. Yegnanarayana: Features for automatic detection of voice bars in continuous speech. INTERSPEECH 2008: 1321-1324
- [c104] B. Yegnanarayana, S. Rajendran, Hussien Seid Worku, N. Dhananjaya: Analysis of glottal stops in speech signals. INTERSPEECH 2008: 1481-1484
- [c103] E. Veera Raghavendra, B. Yegnanarayana, Alan W. Black, Kishore Prahallad: Building sleek synthesizers for multi-lingual screen reader. INTERSPEECH 2008: 1865-1868
- [c102] K. Sri Rama Murty, Saurav Khurana, Yogendra Umesh Itankar, M. R. Kesheorey, B. Yegnanarayana: Efficient representation of throat microphone speech. INTERSPEECH 2008: 2610-2613
- [c101] E. Veera Raghavendra, B. Yegnanarayana, Kishore Prahallad: Speech synthesis using approximate matching of syllables. SLT 2008: 37-40
- [c100] E. Veera Raghavendra, Srinivas Desai, B. Yegnanarayana, Alan W. Black, Kishore Prahallad: Global syllable set for building speech synthesis in Indian languages. SLT 2008: 49-52
- 2007
- [j45] K. Sreenivasa Rao, B. Yegnanarayana: Modeling durations of syllables using neural networks. Comput. Speech Lang. 21(2): 282-295 (2007)
- [j44] A. Shahina, B. Yegnanarayana: Mapping Speech Spectra from Throat Microphone to Close-Speaking Microphone: A Neural Network Approach. EURASIP J. Adv. Signal Process. 2007 (2007)
- [j43] Anil Kumar Sao, B. Yegnanarayana, B. V. K. Vijaya Kumar: Significance of image representation for face verification. Signal Image Video Process. 1(3): 225-237 (2007)
- [j42] R. Kumaraswamy, K. Sri Rama Murty, Bayya Yegnanarayana: Determining Number of Speakers From Multispeaker Speech Signals Using Excitation Source Information. IEEE Signal Process. Lett. 14(7): 481-484 (2007)
- [j41] K. Sreenivasa Rao, S. R. Mahadeva Prasanna, Bayya Yegnanarayana: Determination of Instants of Significant Excitation in Speech Using Hilbert Envelope and Group Delay Function. IEEE Signal Process. Lett. 14(10): 762-765 (2007)
- [j40] Anil Kumar Sao, B. Yegnanarayana: Face Verification Using Template Matching. IEEE Trans. Inf. Forensics Secur. 2(3-2): 636-641 (2007)
- [c99] C. Krishna Mohan, B. Yegnanarayana: Edge-based Sports Video Classification using HMM. IICAI 2007: 559-564
- [c98] Sunitha Guruprasad, B. Yegnanarayana, K. Sri Rama Murty: Detection of instants of glottal closure using characteristics of excitation source. INTERSPEECH 2007: 554-557
- [c97] K. Sri Rama Murty, B. Yegnanarayana, Sunitha Guruprasad: Voice activity detection in degraded speech using excitation source information. INTERSPEECH 2007: 2941-2944
- 2006
- [j39] S. R. Mahadeva Prasanna, Cheedella S. Gupta, B. Yegnanarayana: Extraction of speaker-specific excitation information from linear prediction residual of speech. Speech Commun. 48(10): 1243-1261 (2006)
- [j38] K. Sri Rama Murty, Bayya Yegnanarayana: Combining evidence from residual phase and MFCC features for speaker recognition. IEEE Signal Process. Lett. 13(1): 52-55 (2006)
- [j37] K. Sreenivasa Rao, B. Yegnanarayana: Prosody modification using instants of significant excitation. IEEE Trans. Speech Audio Process. 14(3): 972-980 (2006)
- [c96]