Abstract
Artificial intelligence (AI) and machine learning (ML) are transforming healthcare, offering promising tools for diagnostics, predictive modeling, and personalized treatment. However, the successful deployment of AI in clinical settings faces significant challenges, including ethical concerns and the "AI chasm": the gap between AI's technical performance in controlled environments and its real-world deployment. Building on existing ethical frameworks, we propose a three-phase validation research model for AI in healthcare in which each phase identifies specific ethical risks and outlines the role of interdisciplinary oversight bodies responsible for mitigating them. We argue that AI models must not only demonstrate technical accuracy but also be integrated into healthcare systems in a manner that respects fundamental ethical principles. By embedding ethical oversight throughout the research and validation process, this framework seeks to close the AI chasm and promote the responsible adoption of AI in healthcare.
| Original language | English |
|---|---|
| Journal | AI and Society |
| DOIs | |
| Publication status | Accepted/In press - 2025 |