dialogue experiments with a negotiation dialogue
system using the deep learning-based parser.
Compared to traditional rule-based approaches,
the deep learning-based parser demonstrated
improved accuracy in classifying utterances into their
correct dialogue acts while considerably reducing the
number of utterances classified as “unknown.”
Among the various pretrained models evaluated,
RoBERTa achieved the highest classification accuracy,
while ALBERT limited the drop in accuracy while
substantially reducing computational complexity and
execution time. Moreover, in the
negotiation dialogue experiments, the system using
the deep learning-based parser exhibited enhanced
performance in terms of utility and fairness.
This paper primarily focused on enhancing the
parser component of the modular framework based on
dialogue acts. Consequently, the performance of the
dialogue act approach can be further optimized by
improving the remaining managers and generators. In
recent years, the LLM approach has emerged as the
dominant method for chatbots (Fu et al., 2023; Zhao
et al., 2023). Therefore, we are currently exploring
the integration of LLMs as generators, incorporating
dialogue acts into the prompts (Wagner and Ultes, 2024).
Furthermore, we will perform dialogue experiments
comparing our proposed method with LLMs, aiming
to further demonstrate the value of the dialogue act
approach, which effectively captures the structural
outline of an utterance.
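As a concrete illustration of the direction described above, incorporating a parsed dialogue act into an LLM prompt could be sketched as follows. This is a hypothetical sketch, not the paper's actual implementation: the act label, slot format, and prompt template are all illustrative assumptions.

```python
# Hypothetical sketch: conditioning an LLM-based generator on a parsed
# dialogue act. The act labels, slot representation, and template are
# illustrative assumptions, not the paper's actual prompt format.

def build_prompt(dialogue_act: str, slots: dict, history: list) -> str:
    """Compose a generation prompt that embeds the parsed dialogue act."""
    history_text = "\n".join(history)
    slot_text = ", ".join(f"{k}={v}" for k, v in slots.items())
    return (
        "You are a negotiation dialogue agent.\n"
        f"Conversation so far:\n{history_text}\n"
        f"Next dialogue act to realize: {dialogue_act}"
        + (f" ({slot_text})" if slots else "")
        + "\nRespond with a single natural-language utterance."
    )

prompt = build_prompt(
    dialogue_act="counter-offer",
    slots={"item": "book", "price": 8},
    history=["Buyer: I'll give you $5 for the book.",
             "Seller: That's too low."],
)
print(prompt)
```

Because the act label and slot values constrain the generator's output, this style of prompting is one way to retain the structural control of the dialogue act approach while delegating surface realization to the LLM.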
REFERENCES
Asher, N., Hunter, J., Morey, M., Benamara, F., &
Afantenos, S. (2016). Discourse Structure and Dialogue
Acts in Multiparty Dialogue: The STAC Corpus. In
Proceedings of the 10th International Conference on
Language Resources and Evaluation (LREC), pp.
2721-2727.
Cano-Basave, A. E., & He, Y. (2016). A study of the impact
of persuasive argumentation in political debates. In
Proceedings of the 2016 Conference of the North
American Chapter of the Association for
Computational Linguistics: Human Language
Technologies, pp. 1405-1413.
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019).
BERT: Pre-training of Deep Bidirectional
Transformers for Language Understanding. In
Proceedings of the 2019 Conference of the North
American Chapter of the Association for
Computational Linguistics: Human Language
Technologies (NAACL-HLT), pp. 4171-4186.
Fisher, R., Ury, W. L., & Patton, B. (2011). Getting to yes:
Negotiating agreement without giving in. Penguin.
Fu, Y., Peng, H., Khot, T., & Lapata, M. (2023). Improving
language model negotiation with self-play and in-
context learning from ai feedback. arXiv:2305.10142.
He, H., Chen, D., Balakrishnan, A., & Liang, P. (2018).
Decoupling Strategy and Generation in Negotiation
Dialogues. In Proceedings of the 2018 Conference on
Empirical Methods in Natural Language Processing
(EMNLP), pp. 2333-2343.
Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P.,
& Soricut, R. (2020). ALBERT: A Lite BERT for Self-
supervised Learning of Language Representations. In
Proceedings of the International Conference on
Learning Representations (ICLR).
Lewicki, R. J., Saunders, D. M., & Minton, J. M. (2011).
Essentials of negotiation. McGraw-Hill/Irwin Boston,
MA, USA.
Lewis, M., Yarats, D., Dauphin, Y. N., Parikh, D., & Batra,
D. (2017). Deal or No Deal? End-to-End Learning for
Negotiation Dialogues. In Proceedings of the 2017
Conference on Empirical Methods in Natural
Language Processing (EMNLP), pp. 2443-2453.
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D.,
Levy, O., Lewis, M., Zettlemoyer, L., & Stoyanov, V.
(2019). RoBERTa: A Robustly Optimized BERT
Pretraining Approach. arXiv:1907.11692.
Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019).
DistilBERT, a distilled version of BERT: smaller, faster,
cheaper and lighter. arXiv:1910.01108.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones,
L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2017).
Attention Is All You Need. In Proceedings of the 31st
International Conference on Neural Information
Processing Systems (NeurIPS), pp. 6000-6010.
Wagner, N., & Ultes, S. (2024). On the Controllability of
Large Language Models for Dialogue Interaction. In
Proceedings of the 25th Annual Meeting of the Special
Interest Group on Discourse and Dialogue, pp. 216-
221.
Young, S., Gašić, M., Thomson, B., & Williams, J. D.
(2013). POMDP-Based Statistical Spoken Dialog
Systems: A Review. Proceedings of the IEEE,
Vol. 101(5), pp. 1160-1179.
Żelasko, P., Pappagari, R., & Dehak, N. (2021). What Helps
Transformers Recognize Conversational Structure?
Importance of Context, Punctuation, and Labels in
Dialog Act Recognition. Transactions of the
Association for Computational Linguistics, Vol. 9,
pp. 1163-1179.
Zhan, H., Wang, Y., Feng, T., Hua, Y., Sharma, S., Li, Z.,
Qu, L., & Haffari, G. (2022). Let’s Negotiate! A Survey
of Negotiation Dialogue Systems. arXiv:2212.09072.
Zhao, W. X., Zhou, K., Li, J., Tang, T., Wang, X., Hou, Y.,
et al. (2023). A survey of large language models.
arXiv:2303.18223.