Creation of Emotion-inducing Scenarios using BDI

Pierre Olivier Brosseau, Claude Frasson

Abstract

Automated analysis of human affective behavior has attracted increasing attention from researchers in psychology, computer science, linguistics, neuroscience, and related disciplines. However, existing methods for inducing emotions are mostly limited to audio and visual stimulation. This study tested the induction of emotions in a virtual environment using scenarios designed with the Belief-Desire-Intention (BDI) model, well known in the agent community. The first objective of the study was to design the virtual environment and a set of scenarios set in driving situations, which can generate a variety of emotional conditions and reactions. The design phase was followed by a testing phase in which an EEG headset assessed the resulting emotions (frustration, boredom, and excitement) of 30 participants, to verify how accurately the predicted emotion could be induced. The study demonstrated the reliability of the BDI model, with over 70% of our scenarios working as expected. Finally, we outline some possible uses of inducing emotions in a virtual environment for correcting negative emotions.
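To make the BDI perceive-deliberate-act cycle behind such scenarios concrete, the following is a minimal sketch of how a scenario agent could be structured. All scenario events, rules, and names here are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal BDI-style agent sketch for an emotion-inducing driving scenario.
# Events, beliefs, and plans below are hypothetical examples, not the
# paper's implementation.

class ScenarioAgent:
    def __init__(self, desire):
        # Beliefs: the agent's current model of the virtual environment.
        self.beliefs = {"traffic_jam": False, "road_clear": True}
        # Desire: the emotion the scenario is designed to induce.
        self.desire = desire
        # Intention: the plan currently committed to.
        self.intention = None

    def perceive(self, event):
        """Belief revision: update beliefs from an observed event."""
        if event == "cars_ahead_stop":
            self.beliefs["traffic_jam"] = True
            self.beliefs["road_clear"] = False

    def deliberate(self):
        """Intention selection: pick a plan serving the desired emotion."""
        if self.desire == "frustration" and self.beliefs["traffic_jam"]:
            self.intention = "block_all_lanes"
        elif self.desire == "excitement" and self.beliefs["road_clear"]:
            self.intention = "trigger_high_speed_chase"

    def act(self):
        """Return the scenario action to execute in the environment."""
        return self.intention


agent = ScenarioAgent(desire="frustration")
agent.perceive("cars_ahead_stop")   # belief revision
agent.deliberate()                  # intention selection
print(agent.act())                  # -> block_all_lanes
```

The point of the BDI framing is that the same perceive/deliberate/act cycle can serve different target emotions simply by changing the desire and the plan library, which is what makes a family of driving scenarios easy to author.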

References

  1. Bartlett, M. S., Littlewort, G., Frank, M., Lainscsek, C., Fasel, I., and Movellan, J., “Recognizing Facial Expression: Machine Learning and Application to Spontaneous Behavior,” Proc. IEEE Int'l Conf. Computer Vision and Pattern Recognition (CVPR '05), pp. 568-573, 2005.
  2. Chaouachi, M., Jraidi, I., Frasson, C.: Modeling Mental Workload Using EEG Features for Intelligent Systems. User Modeling and User-Adapted Interaction, Girona, Spain, pp. 50-61 (2011).
  3. D'Mello, S., and Graesser, A., 2009. 'Automatic Detection of Learner's Affect From Gross Body Language', Applied Artificial Intelligence, 23, (2), pp. 123-150.
  4. Georgeff, M., Pell, B., Pollack, M., Tambe, M., Wooldridge, M., The Belief-Desire-Intention Model of Agency, 1999.
  5. Jaekoo J., Perception and BDI Reasoning Based Agent Model for Human Behavior Simulation in Complex System, Human-Computer Interaction, Part V, HCII 2013, LNCS 8008, pp.62-71. 2013.
  6. Jones, C., Jonsson, I. M.: Automatic recognition of affective cues in the speech of car drivers to allow appropriate responses. In: Proc. OZCHI (2005).
  7. Neiberg, D., Elenius, K., and Laskowski, K., “Emotion Recognition in Spontaneous Speech Using GMM,” Proc. Int'l Conf. Spoken Language Processing (ICSLP '06), pp. 809-812, 2006.
  8. Ortony, A., Clore, G., and Collins, A. (1988). The Cognitive Structure of Emotions. Cambridge University Press.
  9. O'Toole, A.J. et al., “A Video Database of Moving Faces and People,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 27, no. 5, pp. 812-816, May 2005.
  10. Pantic, M., and Bartlett, M. S., “Machine Analysis of Facial Expressions,” Face Recognition, K. Delac and M. Grgic, eds., pp. 377-416, I-Tech Education and Publishing, 2007.
  11. Picard, R. W., Vyzas, E., and Healey, J., 2001. 'Toward machine emotional intelligence: analysis of affective physiological state', Pattern Analysis and Machine Intelligence, IEEE Transactions on, 23, (10), pp. 1175-1191.
  12. Puica, M.-A., Florea, A.-M., Emotional Belief-Desire-Intention Agent Model: Previous Work and Proposed Architecture, International Journal of Advanced Research in Artificial Intelligence (IJARAI), Vol. 2, No. 2, 2013.
  13. Rao, A. S., Georgeff, M. P., BDI Agents: From Theory to Practice, Proceedings of the First International Conference on Multiagent Systems, AAAI, 1995.
  14. Schuller, B., Müller, R., Hörnler, B., Höthker, A., Konosu, H., and Rigoll, G., “Audiovisual Recognition of Spontaneous Interest within Conversations,” Proc. Ninth ACM Int'l Conf. Multimodal Interfaces (ICMI '07), pp. 30-37, 2007.
  15. Sebe, N., Lew, M. S., Cohen, I., Sun, Y., Gevers, T., and Huang, T. S., “Authentic Facial Expression Analysis,” Proc. IEEE Int'l Conf. Automatic Face and Gesture Recognition (AFGR), 2004.
  16. Zeng, Z., Pantic, M., Roisman, G. I., and Huang, T. S., “A Survey of Affect Recognition Methods: Audio, Visual, and Spontaneous Expressions,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 1, pp. 39-57, January 2009.


Paper Citation


in Harvard Style

Brosseau P. and Frasson C. (2015). Creation of Emotion-inducing Scenarios using BDI. In Proceedings of the International Conference on Agents and Artificial Intelligence - Volume 2: ICAART, ISBN 978-989-758-074-1, pages 523-529. DOI: 10.5220/0005278505230529


in Bibtex Style

@conference{icaart15,
author={Pierre Olivier Brosseau and Claude Frasson},
title={Creation of Emotion-inducing Scenarios using BDI},
booktitle={Proceedings of the International Conference on Agents and Artificial Intelligence - Volume 2: ICAART},
year={2015},
pages={523-529},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0005278505230529},
isbn={978-989-758-074-1},
}


in EndNote Style

TY - CONF
JO - Proceedings of the International Conference on Agents and Artificial Intelligence - Volume 2: ICAART
TI - Creation of Emotion-inducing Scenarios using BDI
SN - 978-989-758-074-1
AU - Brosseau P.
AU - Frasson C.
PY - 2015
SP - 523
EP - 529
DO - 10.5220/0005278505230529