Ensuring that classification models are fair with respect to sensitive data attributes is a crucial task when applying machine learning to real-world problems. This is particularly true in company production environments, where the decisions output by models may have a direct impact on individuals and where predictive performance must be maintained over time. In this article, building on previous work, we propose copies as a technique to mitigate the bias of trained algorithms in circumstances where the original data is not accessible and/or the models cannot be re-trained. In particular, we explore a simple methodology to build copies that replicate the learned decision behavior in the absence of sensitive attributes. We validate this methodology on the low-sensitivity problem of superhero alignment prediction. We demonstrate that this naïve approach to bias mitigation is feasible in this setting and argue that copies can be further exploited to endow models with desiderata such as fair learning.
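To make the copying idea concrete, the sketch below shows one plausible way to build a copy of a deployed classifier using only its predictions, dropping the sensitive attribute from the copy's inputs. It is a minimal illustration under assumed conditions, not the paper's exact procedure: the synthetic stand-in model, the Gaussian sampling of query points, and the scikit-learn estimators chosen here are all assumptions made for the example.

```python
# Minimal sketch of copying a classifier without its training data,
# assuming the original model is only reachable through predict().
# All names, models, and data here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# --- Stand-in for the deployed model we cannot re-train. ---------------
# Assume it was trained on features [x0, x1, sensitive_attr].
X_train = rng.normal(size=(500, 3))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 2] > 0).astype(int)
original = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# --- Step 1: generate synthetic query points over the input space. -----
X_synth = rng.normal(size=(10_000, 3))

# --- Step 2: label them with the original model's hard predictions. ----
y_synth = original.predict(X_synth)

# --- Step 3: train the copy WITHOUT the sensitive attribute (col 2), ---
# so the replicated decision behavior cannot use it directly.
copy = LogisticRegression().fit(X_synth[:, :2], y_synth)

# How closely the copy reproduces the original decision behavior:
agreement = (copy.predict(X_synth[:, :2]) == y_synth).mean()
print(f"Copy/original agreement on synthetic data: {agreement:.3f}")
```

Note that removing the sensitive column from the copy's inputs only prevents direct use of that attribute; correlated proxy features may still leak it, which is one reason the article treats this as a naïve baseline rather than a complete fairness guarantee.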