Authors: Luca Strano 1; Claudia Bonanno 1; Francesco Ragusa 2,1; Giovanni Farinella 3,2,1 and Antonino Furnari 1,2
Affiliations: 1 FPV@IPLAB, DMI - University of Catania, Italy; 2 Next Vision s.r.l. - Spinoff of the University of Catania, Italy; 3 Cognitive Robotics and Social Sensing Laboratory, ICAR-CNR, Palermo, Italy
Keyword(s):
Virtual Assistants, Visual Question Answering, Large Language Models.
Abstract:
We introduce HERO-GPT, a Multi-Modal Virtual Assistant built on a Multi-Agent System designed to swiftly adapt to any procedural context while minimizing the need for training on context-specific data. In contrast to traditional approaches to conversational agents, HERO-GPT relies on a set of dynamically interchangeable documents, rather than datasets, hand-written rules, or conversational examples, to provide information on the given scenario. This paper presents the system's capability to adapt to an industrial domain scenario through the integration of a GPT-based Large Language Model and an object detector to support Visual Question Answering. HERO-GPT is capable of offering conversational guidance on various aspects of industrial contexts, including information on Personal Protective Equipment (PPE), machinery, procedures, and best practices. Experiments performed in an industrial laboratory with real users demonstrate HERO-GPT's effectiveness. Results indicate that users clearly prefer the proposed virtual assistant over traditional supporting materials such as paper-based manuals in the considered scenario. Moreover, the performance of the proposed system is shown to be comparable or superior to that of traditional approaches, while requiring little domain-specific data for the setup of the system.