SAF: Stakeholders’ Agreement on Fairness in the Practice of Machine Learning Development

Georgina Curto, Flavio Comim

Research output: Indexed journal article › Article › peer-review

Abstract

This paper clarifies why bias cannot be completely mitigated in Machine Learning (ML) and proposes an end-to-end methodology for translating the ethical principle of justice and fairness into the practice of ML development as an ongoing agreement with stakeholders. The pro-ethical iterative process presented in the paper aims to challenge asymmetric power dynamics in fairness decision-making within ML design and to support ML development teams in identifying, mitigating and monitoring bias at each step of ML systems development. The process also provides guidance on how to explain the always imperfect trade-offs in terms of bias to users.

Original language: English
Article number: 29
Number of pages: 19
Journal: Science and Engineering Ethics
Volume: 29
Issue number: 4
DOIs
Publication status: Published - Aug 2023

Keywords

  • Artificial Intelligence
  • Bias
  • Discrimination
  • Fairness
  • Pro-Ethical Design
  • Trustworthy AI
