TY - GEN
T1 - What Are You Looking At?
T2 - 16th International Conference on Social Robotics, ICSR + AI 2024
AU - Rodríguez-Rubio, Carla Zou Yin
AU - Caro-Via, Selene
AU - Yebra-Berenguer, Gemma
AU - González Alzate, Alejandro
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025.
PY - 2025
Y1 - 2025
AB - This paper presents a novel human gaze prediction algorithm based on computer vision and algebraic techniques. The proposed method measures the relative position of the iris within the eye in an image and, by integrating this information with an existing 3D head pose estimation model, estimates the gaze direction. Experiments are conducted in a controlled scenario in which points with known screen coordinates are displayed on a screen and participants are instructed to look at them sequentially, using different approaches, while images of their faces are captured. A gaze prediction model is trained by extracting head and eye pose descriptors from these images and pairing them with the known screen coordinates. Validation shows that introducing the new eye pose descriptors increases the model's accuracy and precision by 10% and 28%, respectively, making the system more robust. This research is part of the Spanish-funded project “DivInTech”, whose future aim is to adapt the developed model to children with autism spectrum disorder (ASD). The model will subsequently be used within the project to evaluate the interaction between the study subjects (children with ASD) and the humanoid robot NAO during different educational activities.
KW - Autism Spectrum Disorders (ASD)
KW - Eye detection
KW - Gaze estimation
KW - Head-pose estimation
KW - Humanoid robot NAO
UR - http://www.scopus.com/inward/record.url?scp=105002115195&partnerID=8YFLogxK
U2 - 10.1007/978-981-96-3525-2_18
DO - 10.1007/978-981-96-3525-2_18
M3 - Conference contribution
AN - SCOPUS:105002115195
SN - 9789819635245
T3 - Lecture Notes in Computer Science
SP - 211
EP - 224
BT - Social Robotics - 16th International Conference, ICSR + AI 2024, Proceedings
A2 - Palinko, Oskar
A2 - Bodenhagen, Leon
A2 - Cabibihan, John-John
A2 - Fischer, Kerstin
A2 - Šabanović, Selma
A2 - Winkle, Katie
A2 - Behera, Laxmidhar
A2 - Ge, Shuzhi Sam
A2 - Chrysostomou, Dimitrios
A2 - Jiang, Wanyue
A2 - He, Hongsheng
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 23 October 2024 through 26 October 2024
ER -
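
The pipeline described in the abstract (a relative iris-position descriptor fused with 3D head pose angles, regressed onto known screen coordinates) can be sketched as follows. This is a minimal illustration under assumed inputs, not the authors' implementation: the landmark layout, the normalization, the 7-dimensional feature vector, and the ridge regressor are all placeholders, and the random arrays merely stand in for real extracted features and screen targets.

# Hypothetical sketch of the descriptor + regression idea from the abstract.
# All names, shapes, and the choice of regressor are assumptions.
import numpy as np
from sklearn.linear_model import Ridge

def iris_offset(corner_outer, corner_inner, lid_upper, lid_lower, iris_center):
    """Relative iris position inside the eye, normalized to roughly [-1, 1].

    All arguments are (x, y) pixel coordinates of eye landmarks in the image.
    """
    corner_outer = np.asarray(corner_outer, dtype=float)
    corner_inner = np.asarray(corner_inner, dtype=float)
    lid_upper = np.asarray(lid_upper, dtype=float)
    lid_lower = np.asarray(lid_lower, dtype=float)
    iris = np.asarray(iris_center, dtype=float)
    eye_center = (corner_outer + corner_inner) / 2.0
    width = np.linalg.norm(corner_inner - corner_outer)   # horizontal eye extent
    height = np.linalg.norm(lid_upper - lid_lower)        # vertical eye opening
    dx = 2.0 * (iris[0] - eye_center[0]) / max(width, 1e-6)
    dy = 2.0 * (iris[1] - eye_center[1]) / max(height, 1e-6)
    return np.array([dx, dy])

def gaze_features(head_pose, left_eye, right_eye):
    """Concatenate head pose angles (yaw, pitch, roll, assumed to come from an
    existing 3D head pose estimator) with both eyes' iris-offset descriptors."""
    return np.concatenate([head_pose,
                           iris_offset(*left_eye),
                           iris_offset(*right_eye)])

# Descriptor for one image (all landmark coordinates below are made-up pixels):
left_eye = [(100, 120), (140, 121), (118, 112), (119, 128), (121, 119)]
right_eye = [(200, 120), (240, 121), (218, 112), (219, 128), (221, 119)]
f = gaze_features([0.10, -0.05, 0.02], left_eye, right_eye)  # shape (7,)

# Training: one feature row per captured image; the targets are the known
# screen coordinates the participant was instructed to look at.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 7))            # stand-in for real extracted features
Y = rng.uniform(0, 1920, size=(200, 2))  # stand-in for (x, y) screen points
model = Ridge(alpha=1.0).fit(X, Y)       # multi-output linear map to the screen
print("predicted screen point:", model.predict(f.reshape(1, -1)))

A simple linear model is used here only because the paper describes the approach as algebraic; any multi-output regressor could take its place, and the reported 10%/28% accuracy and precision gains refer to adding the eye pose descriptors to the head-pose-only feature set.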