A MULTI-MODAL VIRTUAL ENVIRONMENT TO TRAIN FOR JOB INTERVIEW
Hamza Hamdi, Paul Richard, Aymeric Suteau, Mehdi Saleh
2011
Abstract
This paper presents a multi-modal interactive virtual environment (VE) for job interview training. The proposed platform aims to train candidates (students, job hunters, etc.) to better master their emotional state and behavioral skills. Candidates interact with a virtual recruiter represented by an Embodied Conversational Agent (ECA). Both emotional and behavioral states are assessed using human-machine interfaces and biofeedback sensors. The ECA asks contextual questions to measure the candidates' technical skills. Collected data are processed in real time by a behavioral engine, enabling a realistic multi-modal dialogue between the ECA and the candidate. This work represents a socio-technological breakthrough, opening the way to new possibilities in areas such as professional training and medical applications.
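To make the sensing-to-dialogue pipeline concrete, here is a minimal sketch of the kind of behavioral engine the abstract describes: hypothetical biofeedback readings (heart rate and skin conductance are assumptions; the paper does not name its sensors here) are fused into a Pleasure-Arousal-Dominance estimate (Mehrabian, 1996) that selects the virtual recruiter's next reaction. All thresholds, ranges, and mappings below are illustrative, not the authors' implementation.

from dataclasses import dataclass

@dataclass
class PADState:
    pleasure: float   # valence of the candidate's state, in [-1, 1]
    arousal: float    # physiological activation, in [-1, 1]
    dominance: float  # perceived sense of control, in [-1, 1]

def estimate_pad(heart_rate_bpm: float, skin_conductance_us: float) -> PADState:
    # Normalization ranges (60-120 bpm, 1-20 microsiemens) are illustrative
    # assumptions, not the platform's calibration.
    activation = min(max((heart_rate_bpm - 60.0) / 60.0, 0.0), 1.0)
    stress = min(max((skin_conductance_us - 1.0) / 19.0, 0.0), 1.0)
    return PADState(
        pleasure=1.0 - 2.0 * stress,        # high stress -> negative valence
        arousal=2.0 * activation - 1.0,
        dominance=1.0 - 2.0 * activation,   # high activation -> lower control
    )

def select_recruiter_behavior(state: PADState) -> str:
    # Illustrative dialogue policy; the real engine drives a full ECA.
    if state.pleasure < -0.3 and state.arousal > 0.3:
        return "reassure"          # candidate seems stressed: soften the tone
    if state.dominance > 0.5:
        return "challenge"         # candidate seems at ease: harder question
    return "neutral_followup"

# Example: a stressed candidate (elevated heart rate, high skin conductance).
state = estimate_pad(heart_rate_bpm=110.0, skin_conductance_us=14.0)
print(select_recruiter_behavior(state))    # -> "reassure"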
References
- Busso, C., Deng, Z., Yildirim, S., Bulut, M., Lee, C., Kazemzadeh, A., Lee, S., Neumann, U., and Narayanan, S. (2004). Analysis of emotion recognition using facial expressions, speech and multimodal information. In Proceedings of the 6th International Conference on Multimodal Interfaces, pages 205-211, New York, NY, USA. ACM.
- Calvo, R. A. and D'Mello, S. (2010). Affect detection: An interdisciplinary review of models, methods, and their applications. IEEE Transactions on Affective Computing, 1(1):18-37.
- Damasio, A. (1994). L'Erreur de Descartes. La raison des émotions. Odile Jacob.
- Darwin, C. (1872). The Expression of the Emotions in Man and Animals. University of Chicago Press (reprinted 1965), Chicago.
- Ekman, P. (1999). Basic emotions. In Dalgleish, T. and Power, M. (Eds.), Handbook of Cognition and Emotion, pages 301-320. John Wiley and Sons, Sussex, U.K.
- Ekman, P. and Friesen, W. V. (1978). Facial Action Coding System: A Technique for the Measurement of Facial Movement. Consulting Psychologists Press, Palo Alto, California.
- Hammal, Z. and Massot, C. (2010). Holistic and feature-based information towards dynamic multi-expressions recognition. In Proceedings of the International Conference on Computer Vision Theory and Applications (VISAPP 2010), volume 2, pages 300-309.
- Healey, J. and Picard, R. W. (2000). SmartCar: Detecting driver stress. In Proceedings of ICPR'00, pages 218-221, Barcelona, Spain.
- Prendinger, H., Mori, J., and Ishizuka, M. (2005). Recognizing, modeling, and responding to users' affective states. In User Modeling, pages 60-69.
- Lisetti, C. and Nasoz, F. (2004). Using noninvasive wearable computers to recognize human emotions from physiological signals. EURASIP Journal on Applied Signal Processing, 2004:1672-1687.
- Luneski, A. and Bamidis, P. D. (2007). Towards an emotion specification method: Representing emotional physiological signals. In Proceedings of the IEEE Symposium on Computer-Based Medical Systems, pages 363-370.
- Mehrabian, A. (1996). Pleasure-Arousal-Dominance: A General Framework for Describing and Measuring Individual Differences in Temperament. Current Psychology, 14(4):261-292.
- Paleari, M. and Lisetti, C. L. (2006). Toward multimodal fusion of affective cues. In Proceedings of the 1st ACM international workshop on Human-Centered Multimedia, pages 99-108, New York, NY, USA. ACM.
- Pantic, M. and Rothkrantz, L. (2003). Toward an affect-sensitive multimodal human-computer interaction. Proceedings of the IEEE, 91(9):1370-1390.
- Picard, R. (1995). Affective Computing. MIT Media Lab Technical Report TR 321, Massachusetts Institute of Technology, Cambridge, USA.
- Picard, R., Vyzas, E., and Healey, J. (2001). Toward machine emotional intelligence: Analysis of affective physiological state. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(10):1175-1191.
- Roy, D. and Pentland, A. (1996). Automatic spoken affect classification and analysis. In Proceedings of the 2nd International Conference on Automatic Face and Gesture Recognition (FG '96), pages 363-367, Washington, DC, USA.
- Scherer, K. R. (2000). Emotion. In Introduction to Social Psychology: A European Perspective, pages 151-191. Blackwell, Oxford.
- Scherer, K. R. (2003). Vocal communication of emotion: A review of research paradigms. Speech Communication, 40(1-2):227-256.
- Sebe, N., Cohen, I., and Huang, T. (2005). Multimodal Emotion Recognition. World Scientific.
- Sharma, R., Pavlovic, V. I., and Huang, T. S. (1998). Toward multimodal human-computer interface. Proceedings of the IEEE, 86(5):853-869.
- Tian, Y., Kanade, T., and Cohn, J. (2000). Recognizing lower face action units for facial expression analysis. In Proceedings of the 4th IEEE International Conference on Automatic Face and Gesture Recognition (FG'00), pages 484-490.
- Vilhjalmsson, H., Cantelmo, N., Cassell, J., Chafai, N. E., Kipp, M., Kopp, S., Mancini, M., Marsella, S., Marshall, A. N., Pelachaud, C., Ruttkay, Z., Thorisson, K. R., van Welbergen, H., and van der Werf, R. J. (2007). The behavior markup language: Recent developments and challenges. In Intelligent Virtual Agents, pages 99-111, Berlin. Springer.
- Villon, O. (2007). Modeling affective evaluation of multimedia contents: user models to associate subjective experience, physiological expression and contents description. PhD thesis.
- Wang, H., Azuaje, F., Jung, B., and Black, N. (2003). A markup language for electrocardiogram data acquisition and analysis (ecgML). BMC Medical Informatics and Decision Making, 3(1):4.
Paper Citation
in Harvard Style
Hamdi H., Richard P., Suteau A. and Saleh M. (2011). A MULTI-MODAL VIRTUAL ENVIRONMENT TO TRAIN FOR JOB INTERVIEW. In Proceedings of the 1st International Conference on Pervasive and Embedded Computing and Communication Systems - Volume 1: SIMIE (PECCS 2011), ISBN 978-989-8425-48-5, pages 551-556. DOI: 10.5220/0003401805510556
in Bibtex Style
@conference{simie11,
author={Hamza Hamdi and Paul Richard and Aymeric Suteau and Mehdi Saleh},
title={A MULTI-MODAL VIRTUAL ENVIRONMENT TO TRAIN FOR JOB INTERVIEW},
booktitle={Proceedings of the 1st International Conference on Pervasive and Embedded Computing and Communication Systems - Volume 1: SIMIE, (PECCS 2011)},
year={2011},
pages={551-556},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0003401805510556},
isbn={978-989-8425-48-5},
}
in EndNote Style
TY - CONF
JO - Proceedings of the 1st International Conference on Pervasive and Embedded Computing and Communication Systems - Volume 1: SIMIE, (PECCS 2011)
TI - A MULTI-MODAL VIRTUAL ENVIRONMENT TO TRAIN FOR JOB INTERVIEW
SN - 978-989-8425-48-5
AU - Hamdi H.
AU - Richard P.
AU - Suteau A.
AU - Saleh M.
PY - 2011
SP - 551
EP - 556
DO - 10.5220/0003401805510556