Michel F. Valstar
(also published as Michel François Valstar)
Person information
- Affiliation: University of Nottingham, School of Computer Science
2020 – today
- 2024
- [j27] Siyang Song, Yiming Luo, Tugba Tümer, Changzeng Fu, Michel F. Valstar, Hatice Gunes: Loss Relaxation Strategy for Noisy Facial Video-based Automatic Depression Recognition. ACM Trans. Comput. Heal. 5(2): 12:1-12:24 (2024)
- [j26] Mani Kumar Tellamekala, Shahin Amiriparian, Björn W. Schuller, Elisabeth André, Timo Giesbrecht, Michel F. Valstar: COLD Fusion: Calibrated and Ordinal Latent Distribution Fusion for Uncertainty-Aware Multimodal Emotion Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 46(2): 805-822 (2024)
- [j25] Jonathan Gratch, Gretchen Greene, Rosalind W. Picard, Lachlan Urquhart, Michel F. Valstar: Guest Editorial: Ethics in Affective Computing. IEEE Trans. Affect. Comput. 15(1): 1-3 (2024)
- [j24] Mani Kumar Tellamekala, Ömer Sümer, Björn W. Schuller, Elisabeth André, Timo Giesbrecht, Michel F. Valstar: Are 3D Face Shapes Expressive Enough for Recognising Continuous Emotions and Action Unit Intensities? IEEE Trans. Affect. Comput. 15(2): 535-548 (2024)
- [c100] Siyang Song, Micol Spitale, Cheng Luo, Cristina Palmero, Germán Barquero, Hengde Zhu, Sergio Escalera, Michel F. Valstar, Tobias Baur, Fabien Ringeval, Elisabeth André, Hatice Gunes: REACT 2024: the Second Multiple Appropriate Facial Reaction Generation Challenge. FG 2024: 1-5
- [i24] Siyang Song, Micol Spitale, Cheng Luo, Cristina Palmero, Germán Barquero, Hengde Zhu, Sergio Escalera, Michel F. Valstar, Tobias Baur, Fabien Ringeval, Elisabeth André, Hatice Gunes: REACT 2024: the Second Multiple Appropriate Facial Reaction Generation Challenge. CoRR abs/2401.05166 (2024)
- 2023
- [j23] Siyang Song, Shashank Jaiswal, Enrique Sánchez-Lozano, Georgios Tzimiropoulos, Linlin Shen, Michel F. Valstar: Self-Supervised Learning of Person-Specific Facial Dynamics for Automatic Personality Recognition. IEEE Trans. Affect. Comput. 14(1): 178-195 (2023)
- [j22] Ioanna Ntinou, Enrique Sánchez-Lozano, Adrian Bulat, Michel F. Valstar, Georgios Tzimiropoulos: A Transfer Learning Approach to Heatmap Regression for Action Unit Intensity Estimation. IEEE Trans. Affect. Comput. 14(1): 436-450 (2023)
- [j21] Mani Kumar Tellamekala, Timo Giesbrecht, Michel F. Valstar: Modelling Stochastic Context of Audio-Visual Expressive Behaviour With Affective Processes. IEEE Trans. Affect. Comput. 14(3): 2290-2303 (2023)
- [j20] Siyang Song, Zilong Shao, Shashank Jaiswal, Linlin Shen, Michel F. Valstar, Hatice Gunes: Learning Person-Specific Cognition From Facial Reactions for Automatic Personality Recognition. IEEE Trans. Affect. Comput. 14(4): 3048-3065 (2023)
- [c99] Siyang Song, Micol Spitale, Cheng Luo, Germán Barquero, Cristina Palmero, Sergio Escalera, Michel F. Valstar, Tobias Baur, Fabien Ringeval, Elisabeth André, Hatice Gunes: REACT2023: The First Multiple Appropriate Facial Reaction Generation Challenge. ACM Multimedia 2023: 9620-9624
- [i23] Siyang Song, Micol Spitale, Cheng Luo, Germán Barquero, Cristina Palmero, Sergio Escalera, Michel F. Valstar, Tobias Baur, Fabien Ringeval, Elisabeth André, Hatice Gunes: REACT2023: the first Multi-modal Multiple Appropriate Facial Reaction Generation Challenge. CoRR abs/2306.06583 (2023)
- 2022
- [j19] Siyang Song, Shashank Jaiswal, Linlin Shen, Michel F. Valstar: Spectral Representation of Behaviour Primitives for Depression Analysis. IEEE Trans. Affect. Comput. 13(2): 829-844 (2022)
- [j18] Mani Kumar Tellamekala, Timo Giesbrecht, Michel F. Valstar: Dimensional Affect Uncertainty Modelling for Apparent Personality Recognition. IEEE Trans. Affect. Comput. 13(4): 2144-2155 (2022)
- [c98] Vincent Karas, Mani Kumar Tellamekala, Adria Mallol-Ragolta, Michel F. Valstar, Björn W. Schuller: Time-Continuous Audiovisual Fusion with Recurrence vs Attention for In-The-Wild Affect Recognition. CVPR Workshops 2022: 2381-2390
- [i22] Vincent Karas, Mani Kumar Tellamekala, Adria Mallol-Ragolta, Michel F. Valstar, Björn W. Schuller: Continuous-Time Audiovisual Fusion with Recurrence vs. Attention for In-The-Wild Affect Recognition. CoRR abs/2203.13285 (2022)
- [i21] Mani Kumar Tellamekala, Shahin Amiriparian, Björn W. Schuller, Elisabeth André, Timo Giesbrecht, Michel F. Valstar: COLD Fusion: Calibrated and Ordinal Latent Distribution Fusion for Uncertainty-Aware Multimodal Emotion Recognition. CoRR abs/2206.05833 (2022)
- [i20] Mani Kumar Tellamekala, Ömer Sümer, Björn W. Schuller, Elisabeth André, Timo Giesbrecht, Michel F. Valstar: Are 3D Face Shapes Expressive Enough for Recognising Continuous Emotions and Action Unit Intensities? CoRR abs/2207.01113 (2022)
- 2021
- [c97] Gabriel Haddon-Hill, Keerthy Kusumam, Michel F. Valstar: A simple baseline for evaluating Expression Transfer and Anonymisation in Video Transfer. ACII (Workshops and Demos) 2021: 1-8
- [c96] Maria J. Galvez Trigo, Martin Porcheron, Joy Egede, Joel E. Fischer, Adrian Hazzard, Chris Greenhalgh, Edgar Bodiaj, Michel F. Valstar: ALTCAI: Enabling the Use of Embodied Conversational Agents to Deliver Informal Health Advice during Wizard of Oz Studies. CUI 2021: 26:1-26:5
- [c95] Enrique Sanchez, Mani Kumar Tellamekala, Michel F. Valstar, Georgios Tzimiropoulos: Affective Processes: Stochastic Modelling of Temporal Context for Emotion and Facial Expression Recognition. CVPR 2021: 9074-9084
- [c94] Mani Kumar Tellamekala, Timo Giesbrecht, Michel F. Valstar: Apparent Personality Recognition from Uncertainty-Aware Facial Emotion Predictions using Conditional Latent Variable Models. FG 2021: 1-8
- [c93] Stepan Romanov, Heda Song, Michel F. Valstar, Don Sharkey, Caz Henry, Isaac Triguero, Mercedes Torres Torres: Few-Shot Learning for Postnatal Gestational Age Estimation. IJCNN 2021: 1-8
- [c92] Mani Kumar Tellamekala, Enrique Sanchez, Georgios Tzimiropoulos, Timo Giesbrecht, Michel F. Valstar: Stochastic Process Regression for Cross-Cultural Speech Emotion Recognition. Interspeech 2021: 3390-3394
- [c91] Joy O. Egede, Dominic Price, Deepa B. Krishnan, Shashank Jaiswal, Natasha Elliot, Richard F. Morriss, Maria J. Galvez Trigo, Neil Nixon, Peter Liddle, Christopher Greenhalgh, Michel F. Valstar: Design and Evaluation of Virtual Human Mediated Tasks for Assessment of Depression and Anxiety. IVA 2021: 52-59
- [c90] Joy Egede, Maria J. Galvez Trigo, Adrian Hazzard, Martin Porcheron, Edgar Bodiaj, Joel E. Fischer, Chris Greenhalgh, Michel F. Valstar: Designing an Adaptive Embodied Conversational Agent for Health Literacy: a User Study. IVA 2021: 112-119
- [c89] Zilong Shao, Siyang Song, Shashank Jaiswal, Linlin Shen, Michel F. Valstar, Hatice Gunes: Personality Recognition by Modelling Person-specific Cognitive Processes using Graph Representation. ACM Multimedia 2021: 357-366
- [i19] Enrique Sanchez, Mani Kumar Tellamekala, Michel F. Valstar, Georgios Tzimiropoulos: Affective Processes: stochastic modelling of temporal context for emotion and facial expression recognition. CoRR abs/2103.13372 (2021)
- [i18] Siyang Song, Zilong Shao, Shashank Jaiswal, Linlin Shen, Michel F. Valstar, Hatice Gunes: Learning Graph Representation of Person-specific Cognitive Processes from Audio-visual Behaviours for Automatic Personality Recognition. CoRR abs/2110.13570 (2021)
- [i17] Jiaqi Xu, Siyang Song, Keerthy Kusumam, Hatice Gunes, Michel F. Valstar: Two-stage Temporal Modelling Framework for Video-based Depression Recognition using Graph Representation. CoRR abs/2111.15266 (2021)
- 2020
- [j17] Tobias Baur, Alexander Heimerl, Florian Lingenfelser, Johannes Wagner, Michel F. Valstar, Björn W. Schuller, Elisabeth André: eXplainable Cooperative Machine Learning with NOVA. Künstliche Intell. 34(2): 143-164 (2020)
- [c88] Martin Porcheron, Joel E. Fischer, Michel F. Valstar: NottReal: A Tool for Voice-based Wizard of Oz studies. CUI 2020: 35:1-35:3
- [c87] Enrique Sanchez, Michel F. Valstar: A recurrent cycle consistency loss for progressive face-to-face synthesis. FG 2020: 53-60
- [c86] Joy O. Egede, Siyang Song, Temitayo A. Olugbade, Chongyang Wang, Amanda C. de C. Williams, Hongying Meng, Min S. Hane Aung, Nicholas D. Lane, Michel F. Valstar, Nadia Bianchi-Berthouze: EMOPAIN Challenge 2020: Multimodal Pain Evaluation from Facial and Bodily Expressions. FG 2020: 849-856
- [c85] Siyang Song, Enrique Sanchez, Linlin Shen, Michel F. Valstar: Self-supervised learning of Dynamic Representations for Static Images. ICPR 2020: 1619-1626
- [c84] Mani Kumar Tellamekala, Michel F. Valstar, Michael P. Pound, Timo Giesbrecht: Audio-Visual Predictive Coding for Self-Supervised Visual Representation Learning. ICPR 2020: 9912-9919
- [i16] Joy Egede, Temitayo A. Olugbade, Chongyang Wang, Siyang Song, Nadia Berthouze, Michel F. Valstar, Amanda C. de C. Williams, Hongying Meng, Min Hane Aung, Nicholas D. Lane: EMOPAIN Challenge 2020: Multimodal Pain Evaluation from Facial and Bodily Expressions. CoRR abs/2001.07739 (2020)
- [i15] Ioanna Ntinou, Enrique Sanchez, Adrian Bulat, Michel F. Valstar, Georgios Tzimiropoulos: A Transfer Learning approach to Heatmap Regression for Action Unit intensity estimation. CoRR abs/2004.06657 (2020)
- [i14] Enrique Sanchez, Michel F. Valstar: A recurrent cycle consistency loss for progressive face-to-face synthesis. CoRR abs/2004.07165 (2020)
2010 – 2019
- 2019
- [j16] Mercedes Torres Torres, Michel F. Valstar, Caroline Henry, Carole Ward, Don Sharkey: Postnatal gestational age estimation of newborns using Small Sample Deep Learning. Image Vis. Comput. 83-84: 87-99 (2019)
- [j15] Brais Martínez, Michel F. Valstar, Bihan Jiang, Maja Pantic: Automatic Analysis of Facial Actions: A Survey. IEEE Trans. Affect. Comput. 10(3): 325-347 (2019)
- [c83] Joy Egede, Michel F. Valstar, Mercedes Torres Torres, Don Sharkey: Automatic Neonatal Pain Estimation: An Acute Pain in Neonates Database. ACII 2019: 1-7
- [c82] Shashank Jaiswal, Siyang Song, Michel F. Valstar: Automatic prediction of Depression and Anxiety from behaviour and personality attributes. ACII 2019: 1-7
- [c81] Mani Kumar Tellamekala, Michel F. Valstar: Temporally Coherent Visual Representations for Dimensional Affect Recognition. ACII 2019: 1-7
- [c80] Siyang Song, Enrique Sánchez-Lozano, Mani Kumar Tellamekala, Linlin Shen, Alan Johnston, Michel F. Valstar: Dynamic Facial Models for Video-Based Dimensional Affect Estimation. ICCV Workshops 2019: 1608-1617
- [c79] Thomas J. Smith, Michel F. Valstar, Don Sharkey, John A. Crowe: Clinical Scene Segmentation with Tiny Datasets. ICCV Workshops 2019: 1637-1645
- [c78] Shashank Jaiswal, Michel F. Valstar, Keerthy Kusumam, Chris Greenhalgh: Virtual Human Questionnaire for Analysis of Depression, Anxiety and Personality. IVA 2019: 81-87
- [c77] Fabien Ringeval, Björn W. Schuller, Michel F. Valstar, Nicholas Cummins, Roddy Cowie, Leili Tavabi, Maximilian Schmitt, Sina Alisamir, Shahin Amiriparian, Eva-Maria Meßner, Siyang Song, Shuo Liu, Ziping Zhao, Adria Mallol-Ragolta, Zhao Ren, Mohammad Soleymani, Maja Pantic: AVEC 2019 Workshop and Challenge: State-of-Mind, Detecting Depression with AI, and Cross-Cultural Affect Recognition. AVEC@MM 2019: 3-12
- [c76] Fabien Ringeval, Björn W. Schuller, Michel F. Valstar, Nicholas Cummins, Roddy Cowie, Maja Pantic: AVEC'19: Audio/Visual Emotion Challenge and Workshop. ACM Multimedia 2019: 2718-2719
- [p2] Michel F. Valstar: Multimodal databases. The Handbook of Multimodal-Multisensor Interfaces, Volume 3 (3) 2019
- [e8] Fabien Ringeval, Björn W. Schuller, Michel F. Valstar, Nicholas Cummins, Roddy Cowie, Maja Pantic: Proceedings of the 9th International on Audio/Visual Emotion Challenge and Workshop, AVEC@MM 2019, Nice, France, October 21-25, 2019. ACM 2019, ISBN 978-1-4503-6913-8
- [i13] Siyang Song, Enrique Sánchez-Lozano, Linlin Shen, Alan Johnston, Michel F. Valstar: Inferring Dynamic Representations of Facial Actions from a Still Image. CoRR abs/1904.02382 (2019)
- [i12] Fabien Ringeval, Björn W. Schuller, Michel F. Valstar, Nicholas Cummins, Roddy Cowie, Leili Tavabi, Maximilian Schmitt, Sina Alisamir, Shahin Amiriparian, Eva-Maria Meßner, Siyang Song, Shuo Liu, Ziping Zhao, Adria Mallol-Ragolta, Zhao Ren, Mohammad Soleymani, Maja Pantic: AVEC 2019 Workshop and Challenge: State-of-Mind, Detecting Depression with AI, and Cross-Cultural Affect Recognition. CoRR abs/1907.11510 (2019)
- 2018
- [j14] Enrique Sánchez-Lozano, Georgios Tzimiropoulos, Brais Martínez, Fernando De la Torre, Michel F. Valstar: A Functional Regression Approach to Facial Landmark Tracking. IEEE Trans. Pattern Anal. Mach. Intell. 40(9): 2037-2050 (2018)
- [j13] Sergio Escalera, Xavier Baró, Isabelle Guyon, Hugo Jair Escalante, Georgios Tzimiropoulos, Michel F. Valstar, Maja Pantic, Jeffrey F. Cohn, Takeo Kanade: Guest Editorial: The Computational Face. IEEE Trans. Pattern Anal. Mach. Intell. 40(11): 2541-2545 (2018)
- [j12] Fabien Ringeval, Björn W. Schuller, Michel F. Valstar, Jonathan Gratch, Roddy Cowie, Maja Pantic: Introduction to the Special Section on Multimedia Computing and Applications of Socio-Affective Behaviors in the Wild. ACM Trans. Multim. Comput. Commun. Appl. 14(1s): 25:1-25:2 (2018)
- [c75] Enrique Sánchez-Lozano, Georgios Tzimiropoulos, Michel F. Valstar: Joint Action Unit localisation and intensity estimation through heatmap regression. BMVC 2018: 233
- [c74] Harry J. Witchel, Harry L. Claxton, Daisy C. Holmes, Thomas T. Ranji, Joe D. Chalkley, Carlos P. Santos, Carina E. I. Westling, Michel F. Valstar, Matt Celuszak, Patrick Fagan: A trigger-substrate model for smiling during an automated formative quiz: engagement is the substrate, not frustration. ECCE 2018: 24:1-24:4
- [c73] Siyang Song, Linlin Shen, Michel F. Valstar: Human Behaviour-Based Automatic Depression Analysis Using Hand-Crafted Statistics and Deep Learned Spectral Features. FG 2018: 158-165
- [c72] Doratha Vinkemeier, Michel F. Valstar, Jonathan Gratch: Predicting Folds in Poker Using Action Unit Detectors and Decision Trees. FG 2018: 504-511
- [c71] Shashank Jaiswal, Joy Egede, Michel F. Valstar: Deep Learned Cumulative Attribute Regression. FG 2018: 715-722
- [c70] Siyang Song, Shuimei Zhang, Björn W. Schuller, Linlin Shen, Michel F. Valstar: Noise Invariant Frame Selection: A Simple Method to Address the Background Noise Problem for Text-independent Speaker Verification. IJCNN 2018: 1-8
- [c69] Fabien Ringeval, Björn W. Schuller, Michel F. Valstar, Roddy Cowie, Heysem Kaya, Maximilian Schmitt, Shahin Amiriparian, Nicholas Cummins, Denis Lalanne, Adrien Michaud, Elvan Çiftçi, Hüseyin Güleç, Albert Ali Salah, Maja Pantic: AVEC 2018 Workshop and Challenge: Bipolar Disorder and Cross-Cultural Affect Recognition. AVEC@MM 2018: 3-13
- [c68] Fabien Ringeval, Björn W. Schuller, Michel F. Valstar, Roddy Cowie, Maja Pantic: Summary for AVEC 2018: Bipolar Disorder and Cross-Cultural Affect Recognition. ACM Multimedia 2018: 2111-2112
- [e7] Fabien Ringeval, Björn W. Schuller, Michel F. Valstar, Roddy Cowie, Maja Pantic: Proceedings of the 2018 on Audio/Visual Emotion Challenge and Workshop, AVEC@MM 2018, Seoul, Republic of Korea, October 22, 2018. ACM 2018, ISBN 978-1-4503-5983-2
- [i11] Johannes Wagner, Tobias Baur, Yue Zhang, Michel F. Valstar, Björn W. Schuller, Elisabeth André: Applying Cooperative Machine Learning to Speed Up the Annotation of Social Signals in Large Multi-modal Corpora. CoRR abs/1802.02565 (2018)
- [i10] Siyang Song, Shuimei Zhang, Björn W. Schuller, Linlin Shen, Michel F. Valstar: Noise Invariant Frame Selection: A Simple Method to Address the Background Noise Problem for Text-independent Speaker Verification. CoRR abs/1805.01259 (2018)
- [i9] Enrique Sánchez-Lozano, Georgios Tzimiropoulos, Michel F. Valstar: Joint Action Unit localisation and intensity estimation through heatmap regression. CoRR abs/1805.03487 (2018)
- [i8] Enrique Sanchez, Michel F. Valstar: Triple consistency loss for pairing distributions in GAN-based face synthesis. CoRR abs/1811.03492 (2018)
- 2017
- [c67] Mercedes Torres, Michel F. Valstar, Caroline Henry, Carole Ward, Don Sharkey: Small Sample Deep Learning for Newborn Gestational Age Estimation. FG 2017: 79-86
- [c66] Joy Egede, Michel F. Valstar, Brais Martínez: Fusing Deep Learned and Hand-Crafted Features of Appearance, Shape, and Dynamics for Automatic Pain Estimation. FG 2017: 689-696
- [c65] Shashank Jaiswal, Michel F. Valstar, Alinda Gillott, David Daley: Automatic Detection of ADHD and ASD from Expressive Behaviour in RGBD Data. FG 2017: 762-769
- [c64] Michel F. Valstar, Enrique Sánchez-Lozano, Jeffrey F. Cohn, László A. Jeni, Jeffrey M. Girard, Zheng Zhang, Lijun Yin, Maja Pantic: FERA 2017 - Addressing Head Pose in the Third Facial Expression Recognition and Analysis Challenge. FG 2017: 839-847
- [c63] Joy O. Egede, Michel F. Valstar: Cumulative attributes for pain intensity estimation. ICMI 2017: 146-153
- [c62] Angelo Cafaro, Johannes Wagner, Tobias Baur, Soumia Dermouche, Mercedes Torres, Catherine Pelachaud, Elisabeth André, Michel F. Valstar: The NoXi database: multimodal recordings of mediated novice-expert interactions. ICMI 2017: 350-359
- [c61] Fabien Ringeval, Björn W. Schuller, Michel F. Valstar, Jonathan Gratch, Roddy Cowie, Stefan Scherer, Sharon Mozgai, Nicholas Cummins, Maximilian Schmitt, Maja Pantic: AVEC 2017: Real-life Depression, and Affect Recognition Workshop and Challenge. AVEC@ACM Multimedia 2017: 3-9
- [c60] Fabien Ringeval, Björn W. Schuller, Michel F. Valstar, Jonathan Gratch, Roddy Cowie, Maja Pantic: Summary for AVEC 2017: Real-life Depression and Affect Challenge and Workshop. ACM Multimedia 2017: 1963-1964
- [p1] Michel F. Valstar, Stefanos Zafeiriou, Maja Pantic: Facial Actions as Social Signals. Social Signal Processing 2017: 123-154
- [e6] Fabien Ringeval, Björn W. Schuller, Michel F. Valstar, Jonathan Gratch, Roddy Cowie, Maja Pantic: Proceedings of the 7th Annual Workshop on Audio/Visual Emotion Challenge, Mountain View, CA, USA, October 23 - 27, 2017. ACM 2017, ISBN 978-1-4503-5502-5
- [i7] Joy Egede, Michel F. Valstar, Brais Martínez: Fusing Deep Learned and Hand-Crafted Features of Appearance, Shape, and Dynamics for Automatic Pain Estimation. CoRR abs/1701.04540 (2017)
- [i6] Michel F. Valstar, Enrique Sánchez-Lozano, Jeffrey F. Cohn, László A. Jeni, Jeffrey M. Girard, Zheng Zhang, Lijun Yin, Maja Pantic: FERA 2017 - Addressing Head Pose in the Third Facial Expression Recognition and Analysis Challenge. CoRR abs/1702.04174 (2017)
- 2016
- [j11] Brais Martínez, Michel F. Valstar: L2, 1-based regression and prediction accumulation across views for robust facial landmark detection. Image Vis. Comput. 47: 36-44 (2016)
- [j10] Enrique Sánchez-Lozano, Brais Martínez, Michel F. Valstar: Cascaded regression with sparsified feature covariance matrix for facial landmark detection. Pattern Recognit. Lett. 73: 19-25 (2016)
- [j9] Min S. H. Aung, Sebastian Kaltwang, Bernardino Romera-Paredes, Brais Martínez, Aneesha Singh, Matteo Cella, Michel F. Valstar, Hongying Meng, Andrew Kemp, Moshen Shafizadeh, Aaron C. Elkins, Natalie Kanakam, Amschel de Rothschild, Nick Tyler, Paul J. Watson, Amanda C. de C. Williams, Maja Pantic, Nadia Bianchi-Berthouze: The Automatic Detection of Chronic Pain-Related Expression: Requirements, Challenges and the Multimodal EmoPain Dataset. IEEE Trans. Affect. Comput. 7(4): 435-451 (2016)
- [c59] Andry Chowanda, Peter Blanchfield, Martin Flintham, Michel F. Valstar: Computational Models of Emotion, Personality, and Social Relationships for Interactions in Games (Extended Abstract). AAMAS 2016: 1343-1344
- [c58] Sergio Escalera, Mercedes Torres, Brais Martínez, Xavier Baró, Hugo Jair Escalante, Isabelle Guyon, Georgios Tzimiropoulos, Ciprian A. Corneanu, Marc Oliu, Mohammad Ali Bagheri, Michel F. Valstar: ChaLearn Looking at People and Faces of the World: Face Analysis Workshop and Challenge 2016. CVPR Workshops 2016: 706-713
- [c57] Aaron S. Jackson, Michel F. Valstar, Georgios Tzimiropoulos: A CNN Cascade for Landmark Guided Semantic Part Segmentation. ECCV Workshops (3) 2016: 143-155
- [c56] Enrique Sánchez-Lozano, Brais Martínez, Georgios Tzimiropoulos, Michel F. Valstar: Cascaded Continuous Regression for Real-Time Incremental Face Tracking. ECCV (8) 2016: 645-661
- [c55] Matej Kristan, Ales Leonardis, Jiri Matas, Michael Felsberg, Roman P. Pflugfelder, Luka Cehovin, Tomás Vojír, Gustav Häger, Alan Lukezic, Gustavo Fernández, Abhinav Gupta, Alfredo Petrosino, Alireza Memarmoghadam, Álvaro García-Martín, Andrés Solís Montero, Andrea Vedaldi, Andreas Robinson, Andy Jinhua Ma, Anton Varfolomieiev, A. Aydin Alatan, Aykut Erdem, Bernard Ghanem, Bin Liu, Bohyung Han, Brais Martínez, Chang-Ming Chang, Changsheng Xu, Chong Sun, Daijin Kim, Dapeng Chen, Dawei Du, Deepak Mishra, Dit-Yan Yeung, Erhan Gundogdu, Erkut Erdem, Fahad Shahbaz Khan, Fatih Porikli, Fei Zhao, Filiz Bunyak, Francesco Battistone, Gao Zhu, Giorgio Roffo, Gorthi R. K. Sai Subrahmanyam, Guilherme Sousa Bastos, Guna Seetharaman, Henry Medeiros, Hongdong Li, Honggang Qi, Horst Bischof, Horst Possegger, Huchuan Lu, Hyemin Lee, Hyeonseob Nam, Hyung Jin Chang, Isabela Drummond, Jack Valmadre, Jae-chan Jeong, Jaeil Cho, Jae-Yeong Lee, Jianke Zhu, Jiayi Feng, Jin Gao, Jin Young Choi, Jingjing Xiao, Ji-Wan Kim, Jiyeoup Jeong, João F. Henriques, Jochen Lang, Jongwon Choi, José M. Martínez, Junliang Xing, Junyu Gao, Kannappan Palaniappan, Karel Lebeda, Ke Gao, Krystian Mikolajczyk, Lei Qin, Lijun Wang, Longyin Wen, Luca Bertinetto, Madan Kumar Rapuru, Mahdieh Poostchi, Mario Edoardo Maresca, Martin Danelljan, Matthias Mueller, Mengdan Zhang, Michael Arens, Michel F. Valstar, Ming Tang, Mooyeol Baek, Muhammad Haris Khan, Naiyan Wang, Nana Fan, Noor Al-Shakarji, Ondrej Miksik, Osman Akin, Payman Moallem, Pedro Senna, Philip H. S. Torr, Pong C. Yuen, Qingming Huang, Rafael Martin Nieto, Rengarajan Pelapur, Richard Bowden, Robert Laganière, Rustam Stolkin, Ryan Walsh, Sebastian Bernd Krah, Shengkun Li, Shengping Zhang, Shizeng Yao, Simon Hadfield, Simone Melzi, Siwei Lyu, Siyi Li, Stefan Becker, Stuart Golodetz, Sumithra Kakanuru, Sunglok Choi, Tao Hu, Thomas Mauthner, Tianzhu Zhang, Tony P. Pridmore, Vincenzo Santopietro, Weiming Hu, Wenbo Li, Wolfgang Hübner, Xiangyuan Lan, Xiaomeng Wang, Xin Li, Yang Li, Yiannis Demiris, Yifan Wang, Yuankai Qi, Zejian Yuan, Zexiong Cai, Zhan Xu, Zhenyu He, Zhizhen Chi: The Visual Object Tracking VOT2016 Challenge Results. ECCV Workshops (2) 2016: 777-823
- [c54] Michel F. Valstar, Tobias Baur, Angelo Cafaro, Alexandru Ghitulescu, Blaise Potard, Johannes Wagner, Elisabeth André, Laurent Durieu, Matthew P. Aylett, Soumia Dermouche, Catherine Pelachaud, Eduardo Coutinho, Björn W. Schuller, Yue Zhang, Dirk Heylen, Mariët Theune, Jelte van Waterschoot: Ask Alice: an artificial retrieval of information agent. ICMI 2016: 419-420
- [c53] Andry Chowanda, Martin Flintham, Peter Blanchfield, Michel F. Valstar: Playing with Social and Emotional Game Companions. IVA 2016: 85-95
- [c52] Wenjue Zhu, Andry Chowanda, Michel F. Valstar: Topic Switch Models for Dialogue Management in Virtual Humans. IVA 2016: 407-411
- [c51] Michel F. Valstar, Jonathan Gratch, Björn W. Schuller, Fabien Ringeval, Denis Lalanne, Mercedes Torres, Stefan Scherer, Giota Stratou, Roddy Cowie, Maja Pantic: AVEC 2016: Depression, Mood, and Emotion Recognition Workshop and Challenge. AVEC@ACM Multimedia 2016: 3-10
- [c50] Michel F. Valstar, Jonathan Gratch, Björn W. Schuller, Fabien Ringeval, Roddy Cowie, Maja Pantic: Summary for AVEC 2016: Depression, Mood, and Emotion Recognition Workshop and Challenge. ACM Multimedia 2016: 1483-1484
- [c49] Shashank Jaiswal, Michel F. Valstar: Deep learning the dynamic appearance and shape of facial action units. WACV 2016: 1-8
- [e5] Michel F. Valstar, Jonathan Gratch, Björn W. Schuller, Fabien Ringeval, Roddy Cowie, Maja Pantic: Proceedings of the 6th International Workshop on Audio/Visual Emotion Challenge, AVEC@MM 2016, Amsterdam, The Netherlands, October 16, 2016. ACM 2016, ISBN 978-1-4503-4516-3
- [i5] Michel F. Valstar, Jonathan Gratch, Björn W. Schuller, Fabien Ringeval, Denis Lalanne, Mercedes Torres, Stefan Scherer, Giota Stratou, Roddy Cowie, Maja Pantic: AVEC 2016 - Depression, Mood, and Emotion Recognition Workshop and Challenge. CoRR abs/1605.01600 (2016)
- [i4] Enrique Sánchez-Lozano, Brais Martínez, Georgios Tzimiropoulos, Michel F. Valstar: Cascaded Continuous Regression for Real-time Incremental Face Tracking. CoRR abs/1608.01137 (2016)
- [i3] Aaron S. Jackson, Michel F. Valstar, Georgios Tzimiropoulos: A CNN Cascade for Landmark Guided Semantic Part Segmentation. CoRR abs/1609.09642 (2016)
- [i2] Enrique Sánchez-Lozano, Georgios Tzimiropoulos, Brais Martínez, Fernando De la Torre, Michel F. Valstar: A Functional Regression approach to Facial Landmark Tracking. CoRR abs/1612.02203 (2016)
- [i1] Shashank Jaiswal, Michel F. Valstar, Alinda Gillott, David Daley: Automatic Detection of ADHD and ASD from Expressive Behaviour in RGBD Data. CoRR abs/1612.02374 (2016)
- 2015
- [j8] Cynthia Jane Solomon, Michel François Valstar, Richard F. Morriss, John Crowe: Objective Methods for Reliable Detection of Concealed Depression. Frontiers ICT 2: 5 (2015)
- [c48] Marc Schröder, Elisabetta Bevacqua, Roddy Cowie, Florian Eyben, Hatice Gunes, Dirk Heylen, Mark ter Maat, Gary McKeown, Sathish Pammi, Maja Pantic, Catherine Pelachaud, Björn W. Schuller, Etienne de Sevin, Michel F. Valstar, Martin Wöllmer: Building autonomous sensitive artificial listeners (Extended abstract). ACII 2015: 456-462
- [c47] Shashank Jaiswal, Brais Martínez, Michel François Valstar: Learning to combine local models for facial Action Unit detection. FG 2015: 1-6
- [c46] Michel François Valstar, Timur R. Almaev, Jeffrey M. Girard, Gary McKeown, Marc Mehu, Lijun Yin, Maja Pantic, Jeffrey F. Cohn: FERA 2015 - second Facial Expression Recognition and Analysis challenge. FG 2015: 1-8
- [c45] Michel F. Valstar, Gary McKeown, Marc Mehu, Lijun Yin, Maja Pantic, Jeffrey F. Cohn: FERA 2014 chairs' welcome. FG 2015: iii
- [c44] Timur R. Almaev, Brais Martínez, Michel F. Valstar: Learning to Transfer: Transferring Latent Task Structures and Its Application to Person-Specific Facial Action Unit Detection. ICCV 2015: 3774-3782
- [c43] Xiaomeng Wang, Michel F. Valstar, Brais Martínez, Muhammad Haris Khan, Tony P. Pridmore: TRIC-track: Tracking by Regression with Incrementally Learned Cascades. ICCV 2015: 4337-4345
- [c42]