Loughborough University
Leicestershire, UK
LE11 3TU
+44 (0)1509 263171

Loughborough University Institutional Repository

Please use this identifier to cite or link to this item: https://dspace.lboro.ac.uk/2134/23145

Title: Effective recognition of facial micro-expressions with video motion magnification
Authors: Wang, Yandan
See, John
Oh, Yee-Hui
Phan, Raphael C.-W.
Rahulamathavan, Yogachandran
Ling, Huo-Chong
Tan, Su-Wei
Li, Xujie
Keywords: Micro-expressions
Motion magnification
Local binary patterns
Issue Date: 2016
Publisher: © Springer
Citation: WANG, Y. ... et al., 2016. Effective recognition of facial micro-expressions with video motion magnification. Multimedia Tools and Applications, 76 (20), pp. 21665–21690.
Abstract: Facial expression recognition has been intensively studied for decades, notably by the psychology community and more recently the pattern recognition community. What is more challenging, and the subject of more recent research, is the problem of recognizing subtle emotions exhibited by so-called micro-expressions. Recognizing a micro-expression is substantially more challenging than conventional expression recognition because micro-expressions are exhibited for only a fraction of a second and involve minute spatial changes. To date, work in this field remains at a nascent stage, with only a few existing micro-expression databases and methods. In this article, we propose a new micro-expression recognition approach based on the Eulerian motion magnification technique, which reveals hidden information and accentuates the subtle changes in micro-expression motion. Validation of our proposal was done on the recently proposed CASME II dataset in comparison with baseline and state-of-the-art methods. We achieve a good recognition accuracy of up to 75.30% using the leave-one-out cross-validation protocol. Extensive experiments on the various factors at play further demonstrate the effectiveness of our proposed approach.
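The Eulerian motion magnification step referenced in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it is a simplified per-pixel linear variant, where the temporal bandpass is approximated by the difference of two exponential moving averages, and the names `eulerian_magnify`, `alpha`, `r1`, and `r2` are assumptions introduced here for illustration.

```python
import numpy as np

def eulerian_magnify(frames, alpha=20.0, r1=0.4, r2=0.05):
    """Illustrative linear Eulerian magnification (per-pixel sketch).

    frames: float array of shape (T, H, W), pixel intensities over time.
    A temporal bandpass is approximated as the difference of a fast and
    a slow exponential moving average (rates r1 > r2); the filtered
    signal is scaled by alpha and added back to each input frame.
    """
    lo_fast = frames[0].astype(float).copy()
    lo_slow = frames[0].astype(float).copy()
    out = np.empty(frames.shape, dtype=float)
    for t, f in enumerate(frames):
        lo_fast = (1 - r1) * lo_fast + r1 * f   # faster low-pass
        lo_slow = (1 - r2) * lo_slow + r2 * f   # slower low-pass
        band = lo_fast - lo_slow                # temporal bandpass
        out[t] = f + alpha * band               # amplify and add back
    return out

# Usage: a tiny sinusoidal intensity oscillation, invisible at 1%
# amplitude, is strongly amplified after magnification.
T, H, W = 64, 4, 4
t = np.arange(T)
osc = 0.01 * np.sin(2 * np.pi * t / 8)
frames = 0.5 + osc[:, None, None] * np.ones((T, H, W))
magnified = eulerian_magnify(frames, alpha=20.0)
```

The design intent mirrors the Eulerian idea: rather than tracking motion explicitly, each pixel's time series is filtered for the temporal band of interest and the filtered component is exaggerated, which makes subtle micro-expression motion visible to a downstream recognizer.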
Description: The final publication is available at link.springer.com via http://dx.doi.org/10.1007/s11042-016-4079-6.
Sponsor: This work is supported by the TM Grant under projects UbeAware and 2beAware, and the Zhejiang Provincial Natural Science Foundation of China (Grant No. LQ14F020006).
Version: Accepted for publication
DOI: 10.1007/s11042-016-4079-6
URI: https://dspace.lboro.ac.uk/2134/23145
Publisher Link: http://dx.doi.org/10.1007/s11042-016-4079-6
ISSN: 1380-7501
Appears in Collections:Published Articles (Loughborough University London)

Files associated with this item:

File: vidmotionmag_mtap16.pdf
Description: Accepted version
Size: 870.2 kB
Format: Adobe PDF



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.