IJMTES – EMOTIONAL STATE RECOGNITION USING FACIAL AND ACOUSTIC FEATURES OF THE HUMAN

Authors: G. Saranya, G. Mary Amirtha Sagayee, G. S. Anandha Mala

Volume 01, Issue 05, Year 2014

ISSN No: 2348-3121

Page No: 110-113

Abstract—The emotional state of a human can be detected and analyzed through various cues such as facial expressions, voice tone, and body gestures. The detected expressions are analyzed and interpreted by the system, which makes this approach very useful in Human-Computer Interaction. Deblocking is the main step in detecting the various expressions of the human face. Expressions are classified using respective classifiers to detect the emotional state from multiple modalities. In the proposed approach, facial expressions and voice tone are used to detect the emotional state. Key-point detection in the image is performed using Independent Component Analysis (ICA), while Principal Component Analysis (PCA) is used to train on the images. Similarly, for speech, Mel-Frequency Cepstral Coefficients (MFCC) and Subband Based Cepstral (SBC) parameters are used for feature extraction from the voice tone. Decision-level fusion is used to combine the facial expressions and the emotional speech.
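For illustration, the pipeline outlined in the abstract can be sketched roughly as below. This is a minimal Python sketch, assuming scikit-learn and librosa are available; the synthetic data, component counts, SVM classifiers, and fusion weights are assumptions made for demonstration only and do not reflect the authors' actual implementation.

# Minimal sketch of a bimodal emotion-recognition pipeline (assumed details).
import numpy as np
import librosa
from sklearn.decomposition import PCA, FastICA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_train, n_classes = 100, 6

# --- Facial branch: PCA over the training images, ICA for independent components ---
X_faces = rng.random((n_train, 64 * 64))          # placeholder flattened face images
y_emotions = rng.integers(0, n_classes, n_train)  # placeholder emotion labels

pca = PCA(n_components=40).fit(X_faces)           # eigenface-style projection
ica = FastICA(n_components=20, random_state=0).fit(pca.transform(X_faces))
face_feats = ica.transform(pca.transform(X_faces))

face_clf = SVC(probability=True).fit(face_feats, y_emotions)  # classifier choice assumed

# --- Acoustic branch: MFCCs per utterance (SBC features would be computed analogously) ---
def mfcc_features(signal, sr=16000):
    m = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
    return m.mean(axis=1)                         # utterance-level statistics

X_speech = np.stack([mfcc_features(rng.standard_normal(16000)) for _ in range(n_train)])
speech_clf = SVC(probability=True).fit(X_speech, y_emotions)

# --- Decision-level fusion: combine per-modality class probabilities ---
p_face = face_clf.predict_proba(face_feats[:1])
p_speech = speech_clf.predict_proba(X_speech[:1])
p_fused = 0.6 * p_face + 0.4 * p_speech           # weighted sum rule, weights assumed
print("predicted emotion class:", int(p_fused.argmax()))

The weighted-sum rule shown here is only one way to carry out decision-level fusion; product and max rules over the per-modality class probabilities are common alternatives.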

