Authors:
Mehdi Sobhani ¹; Jim Smith ²; Anthony Pipe ³ and Angelika Peer ⁴
Affiliations:
¹ Department of Engineering Mathematics, University of Bristol, Bristol, U.K.
² Department of Computer Science and Creative Technologies, University of the West of England, Bristol, U.K.
³ Bristol Robotics Laboratory, University of the West of England, Bristol, U.K.
⁴ Faculty of Engineering, Free University of Bozen-Bolzano, Bozen-Bolzano, Italy
Keyword(s):
Decision-Making, Joint Action, Human-Robot Interaction, Internal Simulation.
Abstract:
In this paper, we aim to demonstrate the wider-ranging capabilities and ease of transferability of our recently developed decision-making architecture for human-robot collaboration. To this end, we chose an application-specific example related to, but distinct from, the generic task used during the architecture's development: a toy-car assembly task in which a participant works together with a robot to assemble the car. In a “Wizard of Oz” fashion, we compare participants' reactions to working with the robot when it is controlled either by our architecture or by a human “Wizard” hidden from view. With regard to the generalisability of the architecture, we also investigate whether models trained on observed human behaviour in a generic assembly task transfer to this more complex task. Therefore, pre-trained interaction models from a prior generic pick-and-place task are reused in this new application without any re-training. The architecture was implemented on a robotic arm, and participants worked with the arm to pick up toy-car parts one by one and assemble the car collaboratively. Each participant repeated the task three times per condition, Model or Wizard, in a random order, and completed a PeRDITA questionnaire at the end of each trial. First, a test for significant differences was performed, which yielded no significant results for any of the subjective or objective measures. Since the absence of a significant difference does not necessarily imply similarity of conditions, a Bayesian comparison of the conditions was performed next, which indicated a high probability of similarity between the Model and Wizard performance. The high similarity to human-like performance observed in this more complex task supports the claim that the models trained on a more generic task are transferable.
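The abstract does not specify which Bayesian method was used to assess similarity of the two conditions. As an illustration only, the following sketch shows one common approach: a BIC-approximated Bayes factor comparing a "shared mean" model H0 (conditions similar) against a "separate means" model H1 (conditions differ) on questionnaire scores. The function names and the synthetic score data are hypothetical, not taken from the paper.

```python
import numpy as np

def gaussian_loglik(x, mu, sigma):
    """Log-likelihood of data x under a Normal(mu, sigma) model."""
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (x - mu)**2 / (2 * sigma**2))

def bayes_factor_similarity(a, b):
    """BIC-approximated Bayes factor BF01 for H0 (one shared mean and
    variance) versus H1 (separate means, shared variance).
    BF01 > 1 favours similarity of the two conditions."""
    n = len(a) + len(b)
    pooled = np.concatenate([a, b])

    # H0: one mean, one variance -> 2 free parameters
    s0 = pooled.std(ddof=0)
    ll0 = gaussian_loglik(pooled, pooled.mean(), s0)
    bic0 = -2 * ll0 + 2 * np.log(n)

    # H1: two means, one shared variance -> 3 free parameters
    resid = np.concatenate([a - a.mean(), b - b.mean()])
    s1 = resid.std(ddof=0)
    ll1 = gaussian_loglik(a, a.mean(), s1) + gaussian_loglik(b, b.mean(), s1)
    bic1 = -2 * ll1 + 3 * np.log(n)

    # Kass & Raftery: BF01 ~ exp((BIC1 - BIC0) / 2)
    return np.exp((bic1 - bic0) / 2)

# Hypothetical per-condition questionnaire scores for demonstration
rng = np.random.default_rng(0)
model_scores = rng.normal(4.0, 1.0, 30)
wizard_scores = rng.normal(4.1, 1.0, 30)

bf01 = bayes_factor_similarity(model_scores, wizard_scores)
p_similar = bf01 / (1 + bf01)  # posterior P(H0) assuming equal priors
print(f"BF01 = {bf01:.2f}, P(similar) = {p_similar:.2f}")
```

A posterior probability of H0 well above 0.5 would support the kind of "high probability of similarity" conclusion reported in the abstract; the frequentist null result alone could not license that claim.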