TY - JOUR
T1 - Dynamics of Mental Models
T2 - Objective Vs. Subjective User Understanding of a Robot in the Wild
AU - Gebelli, Ferran
AU - Garrell, Anaís
AU - Lemaignan, Séverin
AU - Ros, Raquel
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - In Human-Robot Interaction research, assessing how humans understand the robots they interact with is crucial, particularly when studying the impact of explainability and transparency. Some studies evaluate objective understanding by analysing the accuracy of users' mental models, while others rely on perceived, self-reported levels of subjective understanding. We hypothesise that the two dimensions of understanding may diverge, making them complementary methods to assess the effects of explainability on users. In our study, we track the weekly progression of users' understanding of an autonomous robot operating in a healthcare centre over five weeks. Our results reveal a notable mismatch between objective and subjective understanding. In areas where participants lacked sufficient information, the perception of understanding, i.e. subjective understanding, rose with increased contact with the system, while their actual understanding, i.e. objective understanding, did not. We attribute these results to inaccurate mental models that persist due to limited feedback from the system. Future research should clarify how both the objective and subjective dimensions of understanding can be influenced by explainability measures, and how these two dimensions affect other desiderata such as trust or usability.
KW - Human-Centered Robotics
KW - Long-term Interaction
KW - Social HRI
UR - http://www.scopus.com/inward/record.url?scp=105008201800&partnerID=8YFLogxK
U2 - 10.1109/LRA.2025.3579217
DO - 10.1109/LRA.2025.3579217
M3 - Article
AN - SCOPUS:105008201800
SN - 2377-3766
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
ER -