Shogo Matsuno
2020 – today
2023

- [c29] Kaoru Shimada, Shogo Matsuno, Shota Saito: Discovery of Contrast Itemset with Statistical Background Between Two Continuous Variables. DaWaK 2023: 114-119
- [c28] Shogo Matsuno: Detection of Voluntary Eye Movement for Analysis About Eye Gaze Behaviour in Virtual Communication. HCI (43) 2023: 273-279
- [c27] Shogo Matsuno, Sakae Mizuki, Takeshi Sakaki: Construction of Evaluation Datasets for Trend Forecasting Studies. ICWSM 2023: 1041-1051
- [c26] Shogo Matsuno, Daiki Niikura, Kiyohiko Abe: Gaze Direction Classification Using Vision Transformer. IIAI-AAI-Winter 2023: 193-198

2022

- [j7] Kaoru Shimada, Takaaki Arahira, Shogo Matsuno: ItemSB: Itemsets with Statistically Distinctive Backgrounds Discovered by Evolutionary Method. Int. J. Semantic Comput. 16(3): 357-378 (2022)
- [c25] Shogo Matsuno, Kaoru Shimada: Evolutionary operation setting for outcome accumulation type evolutionary rule discovery method. GECCO Companion 2022: 451-454
- [c24] Shotaro Kawasaki, Ryosuke Motegi, Shogo Matsuno, Yoichi Seki: An Evaluation of Large-scale Information Network Embedding based on Latent Space Model Generating Links. ICSIM 2022: 164-170

2021

- [j6] Hironobu Sato, Kiyohiko Abe, Shogo Matsuno, Minoru Ohyama: Blink input interface enabling multiple candidate selection through sound feedback. Artif. Life Robotics 26(3): 312-317 (2021)
- [c23] Kaoru Shimada, Takaaki Arahira, Shogo Matsuno: Evolutionary Method to Discover Itemsets with Statistically Distinctive Backgrounds. AIKE 2021: 113-120
- [c22] Kaoru Shimada, Takaaki Arahira, Shogo Matsuno: Evolutionary Method for Two-dimensional Associative Local Distribution Rule Mining. ICTAI 2021: 1018-1025

2020

- [c21] Shogo Matsuno, Yuya Yamada, Naoaki Itakura, Tota Mizuno: Examination of Stammering Symptomatic Improvement Training Using Heartbeat-Linked Vibration Stimulation. HCI (17) 2020: 223-232
- [c20] Shogo Matsuno, Minoru Ohyama, Hironobu Sato, Kiyohiko Abe: Classification of Intentional Eye-blinks using Integration Values of Eye-blink Waveform. SMC 2020: 1255-1261
- [c19] Shogo Matsuno, Sakae Mizuki, Takeshi Sakaki: Improved Advertisement Targeting via Fine-grained Location Prediction using Twitter. WWW (Companion Volume) 2020: 527-532
2010 – 2019
2019

- [j5] Shogo Matsuno, Susumu Chida, Naoaki Itakura, Tota Mizuno, Kazuyuki Mito: A method of character input for the user interface with a low degree of freedom. Artif. Life Robotics 24(2): 250-256 (2019)
- [j4] Hironobu Sato, Kiyohiko Abe, Shogo Matsuno, Minoru Ohyama: Advanced eye-gaze input system with two types of voluntary blinks. Artif. Life Robotics 24(3): 324-331 (2019)
- [j3] Shogo Matsuno, Naoaki Itakura, Tota Mizuno: An analysis method for eye motion and eye blink detection from colour images around ocular region. Int. J. Space Based Situated Comput. 9(1): 22-30 (2019)
- [c18] Shogo Matsuno, Hironobu Sato, Kiyohiko Abe, Minoru Ohyama: Expanding the Freedom of Eye-gaze Input Interface using Round-Trip Eye Movement under HMD Environment. ICAT-EGVE (Posters and Demos) 2019: 21-22
- [c17] Yuki Watanabe, Reiji Suzumura, Shogo Matsuno, Minoru Ohyama: Investigation of Context-aware System Using Activity Recognition. ICAIIC 2019: 287-291
- [c16] Shogo Matsuno, Naoaki Itakura, Tota Mizuno, Kazuyuki Mito: Examination of multi-optioning for cVEP-based BCI by fluctuation of indicator lighting intervals and luminance. SMC 2019: 2743-2747

2018

- [j2] Shogo Matsuno, Tota Mizuno, Hirotoshi Asano, Kazuyuki Mito, Naoaki Itakura: Estimating autonomic nerve activity using variance of thermal face images. Artif. Life Robotics 23(3): 367-372 (2018)
- [j1] Yuki Oguri, Shogo Matsuno, Minoru Ohyama: Recognition of a variety of activities considering smartphone positions. Int. J. Space Based Situated Comput. 8(2): 88-95 (2018)
- [c15] Reiji Suzumura, Shogo Matsuno, Minoru Ohyama: Where Can we Accomplish our To-Do?: Estimating the Target Location by Analyzing the Task. AINA 2018: 457-463
- [c14] Shogo Matsuno, Masatoshi Tanaka, Keisuke Yoshida, Kota Akehi, Naoaki Itakura, Tota Mizuno, Kazuyuki Mito: Discrimination of Eye Blinks and Eye Movements as Features for Image Analysis of the Around Ocular Region for Use as an Input Interface. IMIS 2018: 171-182
- [c13] Shogo Matsuno, Reiji Suzumura, Minoru Ohyama: Tourist Support System Using User Context Obtained from a Personal Information Device. UMAP (Adjunct Publication) 2018: 91-95

2017

- [c12] Shogo Matsuno, Masatoshi Tanaka, Keisuke Yoshida, Kota Akehi, Naoaki Itakura, Tota Mizuno, Kazuyuki Mito: Automatic Classification of Eye Blinks and Eye Movements for an Input Interface Using Eye Motion. HCI (29) 2017: 164-169
- [c11] Tota Mizuno, Shogo Matsuno, Kota Akehi, Kazuyuki Mito, Naoaki Itakura, Hirotoshi Asano: Development of Device for Measurement of Skin Potential by Grasping of the Device. HCI (29) 2017: 237-242
- [c10] Tomoyuki Murata, Shogo Matsuno, Kazuyuki Mito, Naoaki Itakura, Tota Mizuno: Investigation of Facial Region Extraction Algorithm Focusing on Temperature Distribution Characteristics of Facial Thermal Images. HCI (29) 2017: 347-352
- [c9] Yuki Oguri, Shogo Matsuno, Minoru Ohyama: Activity Estimation Using Device Positions of Smartphone Users. NBiS 2017: 1126-1135

2016

- [c8] Shogo Matsuno, Takahiro Terasaki, Shogo Aizawa, Tota Mizuno, Kazuyuki Mito, Naoaki Itakura: Physiological and Psychological Evaluation by Skin Potential Activity Measurement Using Steering Wheel While Driving. HCI (27) 2016: 177-181
- [c7] Masatoshi Tanaka, Keisuke Yoshida, Shogo Matsuno, Minoru Ohyama: Advancement of a To-Do Reminder System Focusing on Context of the User. HCI (27) 2016: 385-391
- [c6] Shogo Matsuno, Yuta Ito, Naoaki Itakura, Tota Mizuno, Kazuyuki Mito: A Study of an Intention Communication Assisting System Using Eye Movement. ICCHP (2) 2016: 495-502

2015

- [c5] Shogo Matsuno, Kota Akehi, Naoaki Itakura, Tota Mizuno, Kazuyuki Mito: Computer Input System Using Eye Glances. HCI (4) 2015: 425-432
- [c4] Kiyohiko Abe, Hironobu Sato, Shogo Matsuno, Shoichi Ohi, Minoru Ohyama: Input Interface Using Eye-Gaze and Blink Information. HCI (27) 2015: 463-467

2014

- [c3] Shogo Matsuno, Naoaki Itakura, Minoru Ohyama, Shoichi Ohi, Kiyohiko Abe: Analysis of trends in the occurrence of eyeblinks for an eyeblink input interface. UsARE@RE 2014: 25-31
- [c2] Shogo Matsuno, Minoru Ohyama, Kiyohiko Abe, Shoichi Ohi, Naoaki Itakura: Differentiating Conscious and Unconscious Eyeblinks for Development of Eyeblink Computer Input System. UsARE (Revised Selected Papers) 2014: 160-174

2013

- [c1] Kiyohiko Abe, Hironobu Sato, Shogo Matsuno, Shoichi Ohi, Minoru Ohyama: Automatic Classification of Eye Blink Types Using a Frame-Splitting Method. HCI (16) 2013: 117-124
last updated on 2024-04-29 20:27 CEST by the dblp team
all metadata released as open data under CC0 1.0 license