TY - GEN
T1 - Safe Robot Navigation in Indoor Healthcare Workspaces
AU - Vourkos, Eleftherios G.
AU - Toulkeridou, Evropi
AU - Kourris, Antreas
AU - Ros, Raquel Julia
AU - Christoforou, Eftychios G.
AU - Ramdani, Nacim
AU - Panayides, Andreas S.
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023.
PY - 2023
Y1 - 2023
N2 - Healthcare workspaces would greatly benefit from the employment of robotic assistants in both clinical and non-clinical tasks. However, despite their advantages, a major shortcoming limiting the deployment of robots and their widespread market acceptance is that existing robotic solutions were originally designed for large industrial and warehouse spaces. These are structured, predictable environments in which robots move along predefined paths and interaction with humans is typically not required. Herein, we examine state-of-the-art computer vision methods that enable robots to detect the presence, and identify the type, of dynamic obstacles within their field of view and to adapt their navigation accordingly. To achieve this goal, we trained our robots using contemporary deep learning methods, namely the You Only Look Once (YOLO) architecture and its variants, and obtained promising results in both human and robot detection. For that purpose, a newly constructed dataset of robot images was used, complementing the well-known COCO dataset. Overall, the present study contributes towards the key objective of safe robot navigation in healthcare spaces and underpins the wider application of Human-Robot Interaction studies in less structured environments.
AB - Healthcare workspaces would greatly benefit from the employment of robotic assistants in both clinical and non-clinical tasks. However, despite their advantages, a major shortcoming limiting the deployment of robots and their widespread market acceptance is that existing robotic solutions were originally designed for large industrial and warehouse spaces. These are structured, predictable environments in which robots move along predefined paths and interaction with humans is typically not required. Herein, we examine state-of-the-art computer vision methods that enable robots to detect the presence, and identify the type, of dynamic obstacles within their field of view and to adapt their navigation accordingly. To achieve this goal, we trained our robots using contemporary deep learning methods, namely the You Only Look Once (YOLO) architecture and its variants, and obtained promising results in both human and robot detection. For that purpose, a newly constructed dataset of robot images was used, complementing the well-known COCO dataset. Overall, the present study contributes towards the key objective of safe robot navigation in healthcare spaces and underpins the wider application of Human-Robot Interaction studies in less structured environments.
KW - Convolutional Neural Networks
KW - Healthcare spaces
KW - Human-Robot Interaction
KW - Robot navigation
KW - YOLO
UR - http://www.scopus.com/inward/record.url?scp=85174449170&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-44237-7_6
DO - 10.1007/978-3-031-44237-7_6
M3 - Conference contribution
AN - SCOPUS:85174449170
SN - 9783031442360
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 56
EP - 64
BT - Computer Analysis of Images and Patterns - 20th International Conference, CAIP 2023, Proceedings
A2 - Tsapatsoulis, Nicolas
A2 - Kyriacou, Efthyvoulos
A2 - Lanitis, Andreas
A2 - Theodosiou, Zenonas
A2 - Pattichis, Marios
A2 - Pattichis, Constantinos
A2 - Kyrkou, Christos
A2 - Panayides, Andreas
PB - Springer Science and Business Media Deutschland GmbH
T2 - 20th International Conference on Computer Analysis of Images and Patterns, CAIP 2023
Y2 - 25 September 2023 through 28 September 2023
ER -