International Journal of Biometrics, Volume 16
Volume 16, Number 1, 2024
- Anis Mezghani, Monji Kherallah:
Arabic offline writer identification on a new version of AHTID/MW database. 1-15
- Sukhpreet Kaur, Nilima Kulkarni:
Recent trends and challenges in human computer interaction using automatic emotion recognition: a review. 16-43
- R. Sreemol, M. B. Santosh Kumar, A. Sreekumar:
A secure finger vein recognition system using WS-progressive GAN and C4 classifier. 44-67
- G. Padmashree, A. K. Karunakar:
Exemplar-based facial attribute manipulation: a review. 68-111
Volume 16, Number 2, 2024
- Divine Senanu Ametefe, Suzi Seroja Sarnin, Darmawaty Mohd Ali, Dah John, Abdulmalik Adozuka Aliu:
Fingerprint multiple-class classifier: performance evaluation on known and unknown fingerprint spoofing materials. 113-132
- Soumen Roy, Devadatta Sinha, Rajat Kumar Pal, Utpal Roy:
A unique approach towards keystroke dynamics-based entry-point user access control. 133-157
- C. D. Anjana, C. V. Priyatha, M. S. Siva Prasad:
A comparative study on friction ridge pore features of males and females. 158-175
- Eiman A. Alhamad, Mohammed S. Al Logmani, Abdullah T. Al-Essa, Mohammad Hammoudeh:
A minutiae-based method to store and compare fingerprints. 176-194
- P. Akhila, Shashidhar G. Koolagudi:
Latent fingerprint segmentation using multi-scale attention U-Net. 195-215
Volume 16, Numbers 3/4, 2024
- Anurag Tewari, Prabhat Verma:
Identity authentication model from continuous keystroke pattern using CSO and LSTM network. 217-235
- Xiaoguang Jiang:
Offline handwritten signature recognition based on generative adversarial networks. 236-255
- Zhiqiang Li:
A method for recognising wrong actions of martial arts athletes based on keyframe extraction. 256-271
- MingHui Zhu, Peng-Cheng Huang, JiaYong Zhang:
Speech endpoint detection method based on logarithmic energy entropy product of adaptive sub-bands in low signal-to-noise ratio environments. 272-286
- Shaowu Huang:
A sparse representation-based local occlusion recognition method for athlete expressions. 287-299
- Xia Zhu:
Recognition of starting movement correction for long distance runners based on human key point detection. 300-316
- Song Liu:
Tennis players' hitting action recognition method based on multimodal data. 317-336
- Xiaoguang Jiang:
Chinese named entity recognition method based on multiscale feature fusion. 337-349
- Ruijing Ma:
An online learning behaviour recognition method based on tag set correlation learning. 350-363
- Yang Yang:
Accurate facial expression recognition method based on perceptual hash algorithm. 364-380
- Qin Yang, Zhenhua Zhou:
Multi-modal human motion recognition based on behaviour tree. 381-398
- Soichiro Yokoo, Nobuyuki Nishiuchi, Kimihiro Yamanaka:
Classification of visual attention by microsaccades using machine learning. 399-418
Volume 16, Number 5, 2024
- Zhenyu Zhu:
Identifying illegal actions method of basketball players based on improved genetic algorithm. 419-430
- GaoFeng Han, Yuanquan Zhong:
A multistate pedestrian target recognition and tracking algorithm in public places based on Camshift algorithm. 431-448
- Li Wang:
Rapid recognition of athlete's anxiety emotion based on multimodal fusion. 449-462
- Yi Tang, Jiaojun Yi, Feigang Tan:
Facial micro-expression recognition method based on CNN and transformer mixed model. 463-477
- Haochen Xu, Zhiqiang Zhu:
Athlete facial micro-expression recognition method based on graph convolutional neural network. 478-496
- Wenjia Wu:
Multimodal emotion detection of tennis players based on deep reinforcement learning. 497-513
- Feigang Tan, Yi Tang, Jiaojun Yi:
Multi-pose face recognition method based on improved depth residual network. 514-532
- Lanbo Xu:
Dynamic emotion recognition of human face based on convolutional neural network. 533-551