Research ethics for AI in healthcare: how, when and who

Francesc Pifarre-Esquerda*, Montse Esquerda*, Francesc Garcia-Cuyas*

*Corresponding author of this work

Research output: Article in indexed journal · Peer-reviewed

Abstract

Artificial intelligence (AI) and machine learning (ML) are transforming healthcare, offering promising tools for diagnostics, predictive modeling, and personalized treatment. However, the successful deployment of AI in clinical settings faces significant challenges, including ethical concerns and the “AI-chasm”—the gap between AI’s technical performance in controlled environments and its real-world deployment. Building on existing ethical frameworks, we propose a three-phase validation research model for AI in healthcare in which each phase identifies specific ethical risks and outlines the role of interdisciplinary oversight bodies responsible for mitigating them. We argue that AI models should not only demonstrate technical accuracy, but must also be integrated into healthcare systems in a manner that respects fundamental ethical principles. By embedding ethical oversight throughout the research and validation process, this framework seeks to close the AI-chasm and promote the responsible adoption of AI in healthcare.

Original language: English
Journal: AI and Society
DOIs
Publication status: Accepted/In press - 2025
