New trends in human-computer interfaces, which have evolved from the conventional mouse and keyboard to automatic speech recognition systems and special interfaces designed for disabled people, still do not take full advantage of these valuable communicative abilities, often resulting in a less than natural interaction.
Facial expressions are privileged relative to other nonverbal “channels” of communication, such as vocal inflections and body movements, and they appear to be the most subject to conscious control. The ability to recognize emotion from facial expressions also appears to be at least partially inborn: newborns prefer to look at faces rather than other complex stimuli, and thus may be programmed to focus on the information faces carry. To date, the most widely used speech cues for audio-based emotion recognition are global-level prosodic features, such as statistics of the pitch and the intensity.
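To make that last point concrete, here is a rough sketch of how such global-level pitch and intensity statistics could be extracted from a single utterance. The choice of Python with the librosa library, and the particular statistics computed, are my own assumptions for illustration; this is not the method used in the paper cited below.

```python
# Minimal sketch: global prosodic features (pitch and intensity statistics)
# for one utterance, as commonly used in audio emotion recognition.
# Assumes Python with numpy and librosa installed (not from the cited paper).
import numpy as np
import librosa

def prosodic_features(wav_path):
    """Return a dict of global pitch/intensity statistics for one utterance."""
    y, sr = librosa.load(wav_path, sr=None)

    # Pitch (F0) contour via probabilistic YIN; unvoiced frames come back as NaN.
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    f0 = f0[voiced_flag]  # keep only voiced frames

    # Intensity proxy: frame-level RMS energy.
    rms = librosa.feature.rms(y=y)[0]

    def stats(x, prefix):
        return {
            f"{prefix}_mean": float(np.mean(x)),
            f"{prefix}_std": float(np.std(x)),
            f"{prefix}_min": float(np.min(x)),
            f"{prefix}_max": float(np.max(x)),
            f"{prefix}_range": float(np.max(x) - np.min(x)),
        }

    features = {}
    if f0.size:  # the utterance may be entirely unvoiced
        features.update(stats(f0, "pitch"))
    features.update(stats(rms, "intensity"))
    return features

# Example usage: the resulting feature vector can be fed to any classifier.
# print(prosodic_features("utterance.wav"))
```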
As Lisa Feldman Barrett, Peter Salovey, and John D. Mayer point out, several aspects should be taken into account in facial emotion recognition, such as gender, socioeconomic status, personality, and age, all of which make facial emotion recognition so difficult.
Resources:
- Analysis of Emotion Recognition using Facial Expressions, Speech and Multimodal Information, by Carlos Busso, Zhigang Deng, Serdar Yildirim, Murtaza Bulut, Chul Min Lee, Abe Kazemzadeh, Sungbok Lee, Ulrich Neumann, and Shrikanth Narayanan. In Docstoc. Retrieved 25 March 2009, 12:15, from http://www.docstoc.com/docs/2373058/Analysis-of-Emotion-Recognition-using-Facial-Expressions-Speech
- The Wisdom in Feeling (2002), by Lisa Feldman Barrett, Peter Salovey, and John D. Mayer. In Google Books. Retrieved 28 March 2009, 17:10, from http://books.google.es/books?id=qoUddTKwZ6IC&printsec=frontcover&dq=The+Wisdom+in+Feeling. More information about the book: http://books.google.es/books?id=qoUddTKwZ6IC&dq=The+Wisdom+in+Feeling&source=gbs_summary_s&cad=0