Citation: Young-Suk Shin. Facial Expression Recognition of Various Internal States via Manifold Learning[J]. Journal of Computer Science and Technology, 2009, 24(4): 745-752.

Facial Expression Recognition of Various Internal States via Manifold Learning

Funds: This study was supported by research funds from Chosun University, 2008.
  • Author Bio:

    Young-Suk Shin is an assistant professor in the Department of Information and Communication Engineering, Chosun University. She received the Ph.D. degree in computer science from Yonsei University, Seoul, in 2001. Her research interests include pattern recognition, biometrics, cognitive modeling, facial expression recognition, emotion recognition, virtual reality, and human-computer interaction. From 2008 to 2009, she was visiting the Distributed & Collaborative Virtual Environments Research Laboratory at the University of Ottawa, Canada.

  • Received Date: October 21, 2008
  • Revised Date: April 16, 2009
  • Published Date: July 04, 2009
  • Abstract: Emotions are becoming increasingly important in human-centered interaction architectures, and recognition of facial expressions, which are central to human-computer interaction, is both natural and desirable. Facial expressions, however, convey mixed emotions that are continuous rather than discrete and vary from moment to moment. This paper presents a novel method for recognizing facial expressions of various internal states via manifold learning, in support of human-centered interaction studies. After a critical review of widely used emotion models, facial expression features of various internal states are extracted via locally linear embedding (LLE). Facial expressions are then recognized along the pleasure-displeasure and arousal-sleep dimensions of a two-dimensional model of emotion. The results show that expressions of various internal states, mapped into the embedding space by the LLE algorithm, effectively capture the structural nature of the two-dimensional model of emotion. Our study thus establishes that the relationship between facial expressions of various internal states can be elaborated in the two-dimensional model of emotion via locally linear embedding.
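The pipeline described in the abstract centers on locally linear embedding. As a rough illustration only, and not the author's implementation, the following Python sketch embeds flattened face-image vectors into a two-dimensional space with scikit-learn's LocallyLinearEmbedding and then fits a simple nearest-neighbor classifier in that space. The placeholder data, the choice of classifier, and the reading of the two embedding coordinates as analogues of the pleasure-displeasure and arousal-sleep axes are all assumptions made for illustration.

    # Minimal sketch of LLE-based facial expression analysis (assumptions noted above).
    import numpy as np
    from sklearn.manifold import LocallyLinearEmbedding
    from sklearn.neighbors import KNeighborsClassifier

    # Placeholder data: 200 flattened 64x64 face images and hypothetical
    # internal-state labels (the paper's own database is not reproduced here).
    rng = np.random.default_rng(0)
    X = rng.random((200, 64 * 64))
    y = rng.integers(0, 4, size=200)

    # Embed the high-dimensional face vectors into a 2-D manifold via LLE.
    lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, random_state=0)
    Z = lle.fit_transform(X)

    # A simple nearest-neighbor classifier in the embedding space, used here
    # only to show how the 2-D coordinates could drive recognition.
    clf = KNeighborsClassifier(n_neighbors=5).fit(Z, y)
    print("training accuracy:", clf.score(Z, y))

In this sketch the two columns of Z play the role of the two emotion dimensions; with real labeled expression images, plotting Z against the pleasure-displeasure and arousal-sleep ratings would be the natural way to examine the structural correspondence the paper reports.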
