Risk mitigation in algorithmic accountability: The role of machine learning copies

Irene Unceta, Jordi Nin, Oriol Pujol

Research output: Article in indexed journal · Article · Peer-reviewed

5 Citations (Scopus)

Abstract

Machine learning plays an increasingly important role in our society and economy and is already affecting our daily lives in many different ways. From several perspectives, machine learning is seen as the new engine of productivity and economic growth. It can increase business efficiency, improve decision-making processes and, of course, spawn new products and services built on complex machine learning algorithms. In this scenario, the lack of actionable accountability-related guidance is potentially the single most important challenge facing the machine learning community. Machine learning systems are often composed of many parts, mixing third-party components or software-as-a-service APIs, among others. In this paper we study the role of copies for risk mitigation in such machine learning systems. Formally, a copy can be regarded as an approximated projection operator of a model into a target hypothesis set. Under the conceptual framework of actionable accountability, we explore the use of copies as a viable alternative in circumstances where models can neither be re-trained nor enhanced by means of a wrapper. We use a real residential mortgage default dataset as a use case to illustrate the feasibility of this approach.
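The abstract describes a copy as a model trained in a target hypothesis set to reproduce the decision behaviour of an original model that cannot be re-trained or wrapped. As an illustration only, the following is a minimal sketch of that copying idea, assuming scikit-learn-style models; the synthetic data, the SVM/decision-tree pairing, and the fidelity measure are hypothetical stand-ins, not the paper's actual residential mortgage setup.

```python
# Minimal sketch of model copying: train a copy in a different hypothesis set
# using synthetic samples labeled only through the original model's
# prediction interface (no access to the original training data is assumed).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Stand-in for the deployed original model (hypothetical example).
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
original = SVC(gamma="auto").fit(X, y)

# Draw synthetic points covering the input domain and label them with the
# original model's hard predictions.
rng = np.random.default_rng(0)
X_synth = rng.uniform(X.min(axis=0), X.max(axis=0), size=(5000, X.shape[1]))
y_synth = original.predict(X_synth)

# Train the copy in a target hypothesis set, here a shallow decision tree.
copy = DecisionTreeClassifier(max_depth=5).fit(X_synth, y_synth)

# Fidelity: agreement between copy and original on fresh synthetic points.
X_test = rng.uniform(X.min(axis=0), X.max(axis=0), size=(1000, X.shape[1]))
fidelity = np.mean(copy.predict(X_test) == original.predict(X_test))
print(f"Copy fidelity to original model: {fidelity:.3f}")
```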

Original language: English
Article number: e0241286
Journal: PLoS ONE
Volume: 15
Issue: 11 November
DOIs
Publication status: Published - Nov. 2020
Published externally
