
Paper: Decoding Persuasiveness in Eloquence Competitions: An Investigation into the LLM’s Ability to Assess Public Speaking

Authors: Alisa Barkar 1; Mathieu Chollet 2,3; Matthieu Labeau 1; Beatrice Biancardi 4 and Chloe Clavel 5

Affiliations: 1 LTCI, Institut Polytechnique de Paris, Telecom-Paris, 19 Place Marguerite Perey, 91120 Palaiseau, France; 2 School of Computing Science, University of Glasgow, G12 8RZ Glasgow, U.K.; 3 IMT Atlantique, LS2N, UMR CNRS 6004, 44307 Nantes, France; 4 CESI LINEACT, Nanterre, France; 5 ALMAnaCH, INRIA, Paris, France

Keyword(s): Public Speaking Assessment, Large Language Models (LLMs), Persuasiveness Prediction, Interpretable Features, Textual Modality, Automatic Speech Evaluation, Open-Source Models.

Abstract: The increasing importance of public speaking (PS) skills has fueled the development of automated assessment systems, yet the integration of large language models (LLMs) in this domain remains underexplored. This study investigates the application of LLMs to PS assessment by predicting persuasiveness. We propose a novel framework in which LLMs evaluate criteria derived from educational literature and feedback from PS coaches, yielding new interpretable textual features. We demonstrate that a regression model built on these features predicts persuasiveness with a Root Mean Squared Error (RMSE) of 0.6, underperforming an approach based on hand-crafted lexical features (RMSE of 0.51) while outperforming direct zero-shot LLM persuasiveness predictions (RMSE of 0.8). Furthermore, we find that only the LLM-evaluated criterion of language level is predictable from lexical features (F1-score of 0.56), calling into question the relationship between these two feature sets. Based on our findings, we critically discuss the ability of LLMs to analyze PS accurately. To ensure reproducibility and adaptability to emerging models, all source code and materials are publicly available on GitHub.
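
As a rough, non-authoritative illustration of the evaluation protocol summarized in the abstract, the short Python sketch below fits a regression on hypothetical LLM-rated criterion scores and reports RMSE against human persuasiveness ratings. The criterion set, data shapes, and variable names are assumptions made purely for illustration; the authors' actual pipeline is the code they released on GitHub.

# Minimal sketch (assumptions only): regress human persuasiveness ratings on
# LLM-evaluated criterion scores and report RMSE, mirroring the protocol
# described in the abstract. Criteria, shapes, and data here are hypothetical.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# One row per speech transcript; columns are LLM-rated criteria
# (e.g. clarity, structure, language level); targets are human scores.
n_speeches, n_criteria = 200, 6
X = rng.uniform(1, 5, size=(n_speeches, n_criteria))  # LLM criterion scores
y = rng.uniform(1, 5, size=n_speeches)                # human persuasiveness

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = Ridge(alpha=1.0).fit(X_train, y_train)
rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
print(f"Persuasiveness RMSE: {rmse:.2f}")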

CC BY-NC-ND 4.0

Paper citation in several formats:
Barkar, A., Chollet, M., Labeau, M., Biancardi, B. and Clavel, C. (2025). Decoding Persuasiveness in Eloquence Competitions: An Investigation into the LLM’s Ability to Assess Public Speaking. In Proceedings of the 17th International Conference on Agents and Artificial Intelligence - Volume 3: ICAART; ISBN 978-989-758-737-5; ISSN 2184-433X, SciTePress, pages 538-546. DOI: 10.5220/0013158400003890

@conference{icaart25,
  author={Alisa Barkar and Mathieu Chollet and Matthieu Labeau and Beatrice Biancardi and Chloe Clavel},
  title={Decoding Persuasiveness in Eloquence Competitions: An Investigation into the LLM’s Ability to Assess Public Speaking},
  booktitle={Proceedings of the 17th International Conference on Agents and Artificial Intelligence - Volume 3: ICAART},
  year={2025},
  pages={538-546},
  publisher={SciTePress},
  organization={INSTICC},
  doi={10.5220/0013158400003890},
  isbn={978-989-758-737-5},
  issn={2184-433X},
}

TY - CONF
JO - Proceedings of the 17th International Conference on Agents and Artificial Intelligence - Volume 3: ICAART
TI - Decoding Persuasiveness in Eloquence Competitions: An Investigation into the LLM’s Ability to Assess Public Speaking
SN - 978-989-758-737-5
IS - 2184-433X
AU - Barkar, A.
AU - Chollet, M.
AU - Labeau, M.
AU - Biancardi, B.
AU - Clavel, C.
PY - 2025
SP - 538
EP - 546
DO - 10.5220/0013158400003890
PB - SciTePress
ER -