Discussion-skill Analytics with Acoustic, Linguistic and Psychophysiological Data

Katashi Nagao, Kosuke Okamoto, Shimeng Peng, Shigeki Ohira


In this paper, we propose a system that improves the discussion skills of meeting participants by automatically evaluating statements made in a meeting and effectively feeding the evaluation results back to the participants. To evaluate skills automatically, the system uses both acoustic and linguistic features of statements. From the acoustic features, it evaluates how a person speaks, such as the loudness of their voice ("voice size"); from the linguistic features, it evaluates the content of a statement, such as the "consistency of context." These features can be obtained from meeting minutes. Since the semantic content of statements, such as the "consistency of context," is difficult to evaluate directly, we build a machine learning model that uses features of the minutes, such as speaker attributes and the relationships between statements. In addition, we argue that participants' heart rate (HR) data can be used to effectively evaluate their cognitive performance, specifically their performance in a discussion that consists of several Q&A segments (question-and-answer pairs). We collect HR data in real time during a discussion and train machine-learning models for the evaluation. We confirmed that the proposed method is effective for evaluating the discussion skills of meeting participants.
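To make the feature-fusion idea concrete, the following is a minimal sketch of how a per-statement feature vector combining acoustic, linguistic, and HR signals could be assembled. All names, the word-overlap proxy for "consistency of context," and the feature choices are illustrative assumptions for this sketch, not the authors' actual implementation (which learns this evaluation with a model trained on annotated minutes).

```python
# Hypothetical sketch: fuse acoustic ("voice size"), linguistic
# (context-consistency proxy), and heart-rate features per statement.
# The paper trains ML models on such features; here we only build the vector.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Statement:
    speaker: str          # speaker attribute from the minutes (assumed field)
    text: str             # statement text from the minutes
    rms_volume: float     # acoustic proxy for "voice size" (assumed)
    hr_samples: list      # HR readings recorded during the statement (assumed)

def context_consistency(current: str, previous: str) -> float:
    """Crude word-overlap proxy for 'consistency of context' with the
    preceding statement; the paper uses a learned model instead."""
    cur, prev = set(current.lower().split()), set(previous.lower().split())
    union = cur | prev
    return len(cur & prev) / len(union) if union else 0.0

def feature_vector(stmt: Statement, prev_text: str) -> list:
    """One row of training data: [acoustic, linguistic, physiological]."""
    return [stmt.rms_volume,
            context_consistency(stmt.text, prev_text),
            mean(stmt.hr_samples)]
```

A usage example: `feature_vector(Statement("A", "the plan works", 0.5, [70, 72]), "the plan")` yields a three-element row that a standard classifier could be trained on, one row per statement in the minutes.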


Paper Citation