Yoshihiko Gotoh
2020 – today
- 2024
- [i3]Rabab Algadhy, Yoshihiko Gotoh, Steve Maddock:
The impact of differences in facial features between real speakers and 3D face models on synthesized lip motions. CoRR abs/2407.17253 (2024)
- 2023
- [c44]Jason Clarke, Yoshihiko Gotoh, Stefan Goetze:
Improving Audiovisual Active Speaker Detection in Egocentric Recordings with the Data-Efficient Image Transformer. ASRU 2023: 1-8
- [c43]Abdulaziz Alrashidi, Peter Cudd, Charith Abhayaratne, Yoshihiko Gotoh:
Exploration of verbal descriptions and dynamic indoors environments for people with sight loss. CHI Extended Abstracts 2023: 110:1-110:6
- 2020
- [j10]Manal Al Ghamdi, Yoshihiko Gotoh:
Graph-based topic models for trajectory clustering in crowd videos. Mach. Vis. Appl. 31(5): 39 (2020)
2010 – 2019
- 2019
- [c42]Rabab Algadhy, Yoshihiko Gotoh, Steve Maddock:
3D Visual Speech Animation Using 2D Videos. ICASSP 2019: 2367-2371
- 2018
- [c41]Manal Alghamdi, Yoshihiko Gotoh:
Graph-based Correlated Topic Model for Motion Patterns Analysis in Crowded Scenes from Tracklets. BMVC 2018: 311
- [c40]Manal Al Ghamdi, Yoshihiko Gotoh:
Graph-Based Correlated Topic Model for Trajectory Clustering in Crowded Videos. WACV 2018: 1029-1037
- 2017
- [j9]Muhammad Usman Ghani Khan, Yoshihiko Gotoh:
Generating natural language tags for video information management. Mach. Vis. Appl. 28(3-4): 243-265 (2017)
- [c39]Nouf Al Harbi, Yoshihiko Gotoh:
Natural Language Descriptions for Human Activities in Video Streams. INLG 2017: 85-94
- [c38]Muhammad Usman Ghani Khan, Yoshihiko Gotoh, Nudrat Nida:
Medical Image Colorization for Better Visualization and Segmentation. MIUA 2017: 571-580
- 2016
- [c37]Nouf Al Harbi, Yoshihiko Gotoh:
Natural Language Descriptions of Human Activities Scenes: Corpus Generation and Analysis. VL@ACL 2016
- [c36]Samyan Qayyum Wahla, Sahar Waqar, Muhammad Usman Ghani Khan, Yoshihiko Gotoh:
The University of Sheffield and University of Engineering & Technology, Lahore at TRECVID 2016: Video to Text Description Task. TRECVID 2016
- 2015
- [j8]Nouf Al Harbi, Yoshihiko Gotoh:
A unified spatio-temporal human body region tracking approach to action recognition. Neurocomputing 161: 56-64 (2015)
- [j7]Muhammad Usman Ghani Khan, Nouf Al Harbi, Yoshihiko Gotoh:
A framework for creating natural language descriptions of video streams. Inf. Sci. 303: 61-82 (2015)
- [c35]Atiqah Izzati Masrani, Yoshihiko Gotoh:
Corpus Generation and Analysis: Incorporating Audio Data Towards Curbing Missing Information. KDWeb 2015: 89-100
- [c34]Maira Alvi, Muhammad Usman Ghani Khan, Yoshihiko Gotoh, Mehroz Sadiq, Mubeen Aslam:
University of Engineering & Technology, Lahore / The University of Sheffield at TRECVID 2015: Instance Search. TRECVID 2015
- 2014
- [c33]Manal Al Ghamdi, Yoshihiko Gotoh:
Video Clip Retrieval by Graph Matching. ECIR 2014: 412-417
- [c32]Manal Al Ghamdi, Yoshihiko Gotoh:
Alignment of nearly-repetitive contents in a video stream with manifold embedding. ICASSP 2014: 1255-1259
- [c31]Manal Al Ghamdi, Yoshihiko Gotoh:
Manifold Matching with Application to Instance Search Based on Video Queries. ICISP 2014: 477-486
- [c30]Sana Amanat, Muhammad Usman Ghani Khan, Nudrat Nida, Yoshihiko Gotoh:
The University of Sheffield and University of Engineering & Technology, Lahore at TRECVID 2014: Instance Search Task. TRECVID 2014
- 2013
- [c29]Manal Al Ghamdi, Yoshihiko Gotoh:
Spatio-temporal Manifold Embedding for Nearly-Repetitive Contents in a Video Stream. CAIP (1) 2013: 70-77
- [c28]Nouf Al Harbi, Yoshihiko Gotoh:
Spatio-temporal Human Body Segmentation from Video Stream. CAIP (1) 2013: 78-85
- [c27]Muhammad Usman Ghani Khan, Khawar Bashir, Abad Ali Shah, Lei Zhang, Yoshihiko Gotoh, Pervaiz Iqbal Khan, Mehwish Amiruddin:
The University of Sheffield, Harbin University and University of Engineering & Technology, Lahore at TRECVID 2013: Instance Search & Semantic Indexing. TRECVID 2013
- 2012
- [c26]Manal Al Ghamdi, Nouf Al Harbi, Yoshihiko Gotoh:
Spatio-temporal Video Representation with Locality-Constrained Linear Coding. ECCV Workshops (3) 2012: 101-110
- [c25]Manal Al Ghamdi, Lei Zhang, Yoshihiko Gotoh:
Spatio-temporal SIFT and Its Application to Human Action Classification. ECCV Workshops (1) 2012: 301-310
- [c24]Muhammad Usman Ghani Khan, Rao Muhammad Adeel Nawab, Yoshihiko Gotoh:
Natural Language Descriptions of Visual Scenes Corpus Generation and Analysis. ESIRMT/HyTra@EACL 2012: 38-47
- [c23]Muhammad Usman Ghani Khan, Lei Zhang, Yoshihiko Gotoh:
Generating coherent natural language annotations for video streams. ICIP 2012: 2893-2896
- [c22]Manal Al Ghamdi, Muhammad Usman Ghani Khan, Lei Zhang, Yoshihiko Gotoh:
The University of Sheffield and Harbin Engineering University at TRECVID 2012: Instance Search. TRECVID 2012
- 2011
- [c21]Muhammad Usman Ghani Khan, Lei Zhang, Yoshihiko Gotoh:
Towards coherent natural language description of video streams. ICCV Workshops 2011: 664-671
- [c20]Lei Zhang, Muhammad Usman Ghani Khan, Yoshihiko Gotoh:
Video scene classification based on natural language description. ICCV Workshops 2011: 942-949
- [c19]Muhammad Usman Ghani Khan, Lei Zhang, Yoshihiko Gotoh:
Human Focused Video Description. ICCV Workshops 2011: 1480-1487
- 2010
- [c18]Siripinyo Chantamunee, Yoshihiko Gotoh:
Nearly-repetitive video synchronisation using nonlinear manifold embedding. ICASSP 2010: 2282-2285
2000 – 2009
- 2009
- [j6]BalaKrishna Kolluru, Yoshihiko Gotoh:
On the subjectivity of human-authored summaries. Nat. Lang. Eng. 15(2): 193-213 (2009)
- 2008
- [j5]Heidi Christensen, Yoshihiko Gotoh, Steve Renals:
A Cascaded Broadcast News Highlighter. IEEE Trans. Speech Audio Process. 16(1): 151-161 (2008)
- [c17]Siripinyo Chantamunee, Yoshihiko Gotoh:
University of Sheffield at TRECVID 2008: Rushes Summarisation and Video Copy Detection. TRECVID 2008
- 2007
- [c16]BalaKrishna Kolluru, Yoshihiko Gotoh:
Relative evaluation of informativeness in machine generated summaries. INTERSPEECH 2007: 1338-1341
- [c15]BalaKrishna Kolluru, Yoshihiko Gotoh:
Speaker role based structural classification of broadcast news stories. INTERSPEECH 2007: 2593-2596
- [c14]Siripinyo Chantamunee, Yoshihiko Gotoh:
University of Sheffield at TRECVID 2007: Shot Boundary Detection and Rushes Summarisation. TRECVID 2007
- 2006
- [c13]Jana Urban, Xavier Hilaire, Frank Hopfgartner, Robert Villa, Joemon M. Jose, Siripinyo Chantamunee, Yoshihiko Gotoh:
Glasgow University at TRECVid 2006. TRECVID 2006
- 2005
- [c12]BalaKrishna Kolluru, Yoshihiko Gotoh:
On the Subjectivity of Human Authored Summaries. IEEvaluation@ACL 2005: 9-16
- [c11]Heidi Christensen, BalaKrishna Kolluru, Yoshihiko Gotoh, Steve Renals:
Maximum entropy segmentation of broadcast news. ICASSP (1) 2005: 1029-1032
- [c10]BalaKrishna Kolluru, Heidi Christensen, Yoshihiko Gotoh:
Multi-stage compaction approach to broadcast news summarisation. INTERSPEECH 2005: 69-72
- 2004
- [c9]Heidi Christensen, BalaKrishna Kolluru, Yoshihiko Gotoh, Steve Renals:
From Text Summarisation to Style-Specific Summarisation for Broadcast News. ECIR 2004: 223-237
- 2000
- [c8]Yoshihiko Gotoh, Steve Renals:
Statistical Language Modelling. ELSNET Summer School 2000: 78-105
- [c7]Yoshihiko Gotoh, Steve Renals:
Variable word rate N-grams. ICASSP 2000: 1591-1594
- [i2]Yoshihiko Gotoh, Steve Renals:
Variable Word Rate N-grams. CoRR cs.CL/0003081 (2000)
- [i1]Yoshihiko Gotoh, Steve Renals:
Information Extraction from Broadcast News. CoRR cs.CL/0003084 (2000)
1990 – 1999
- 1999
- [j4]Yoshihiko Gotoh, Steve Renals:
Topic-based mixture language modelling. Nat. Lang. Eng. 5(4): 355-375 (1999)
- [c6]Yoshihiko Gotoh, Steve Renals, Gethin Williams:
Named entity tagged language models. ICASSP 1999: 513-516
- [c5]Steve Renals, Yoshihiko Gotoh:
Integrated transcription and identification of named entities in broadcast speech. EUROSPEECH 1999: 1039-1042
- 1998
- [j3]Yoshihiko Gotoh, Michael M. Hochberg, Harvey F. Silverman:
Efficient training algorithms for HMMs using incremental estimation. IEEE Trans. Speech Audio Process. 6(6): 539-548 (1998)
- 1997
- [c4]Yoshihiko Gotoh, Steve Renals:
Document space models using latent semantic analysis. EUROSPEECH 1997: 1443-1446
- 1996
- [j2]Eugene Charniak, Glenn Carroll, John E. Adcock, Anthony R. Cassandra, Yoshihiko Gotoh, Jeremy Katz, Michael L. Littman, John McCann:
Taggers for Parsers. Artif. Intell. 85(1-2): 45-57 (1996)
- [j1]Daniel J. Mashao, Yoshihiko Gotoh, Harvey F. Silverman:
Analysis of LPC/DFT features for an HMM-based alphadigit recognizer. IEEE Signal Process. Lett. 3(4): 103-106 (1996)
- [c3]Yoshihiko Gotoh, Harvey F. Silverman:
Incremental ML estimation of HMM parameters for efficient training. ICASSP 1996: 585-588
- [c2]John E. Adcock, Yoshihiko Gotoh, Daniel J. Mashao, Harvey F. Silverman:
Microphone-array speech recognition via incremental map training. ICASSP 1996: 897-900
- 1994
- [c1]Yoshihiko Gotoh, Michael M. Hochberg, Harvey F. Silverman:
Using MAP estimated parameters to improve HMM speech recognition performance. ICASSP (1) 1994: 229-232
last updated on 2024-09-19 23:45 CEST by the dblp team
all metadata released as open data under CC0 1.0 license