Finding Pareto trade-offs in fair and accurate detection of toxic speech

Soumyajit Gupta, Venelin Kovatchev, Anubrata Das, Maria De-Arteaga, Matthew Lease

Research output: Article in indexed journal › Article › Peer-reviewed

Abstract

Introduction. Optimizing NLP models for fairness poses many challenges. The lack of differentiable fairness measures prevents gradient-based loss training or requires surrogate losses that diverge from the true metric of interest. In addition, competing objectives (e.g., accuracy vs. fairness) often require making trade-offs based on stakeholder preferences, but stakeholders may not know their preferences before seeing system performance under different trade-off settings. Method. We formulate the GAP loss, a differentiable version of a fairness measure, Accuracy Parity, to provide balanced accuracy across binary demographic groups. Analysis. We show how model-agnostic HyperNetwork optimization can efficiently train arbitrary NLP model architectures to learn Pareto-optimal trade-offs between competing metrics like predictive performance vs. group fairness. Results. Focusing on the task of toxic language detection, we show the generality and efficacy of our proposed GAP loss function across two datasets, three neural architectures, and three fairness loss functions. Conclusion. Our GAP loss for the task of toxic language detection demonstrates promising results: improved fairness and computational efficiency. Our work can be extended to other tasks, datasets, and neural models in any practical situation where ensuring equal accuracy across different demographic groups is a desired objective.
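The abstract does not reproduce the exact form of the GAP loss. The following is a minimal sketch, assuming a soft (probability-based) accuracy surrogate and a single fixed trade-off weight; the names soft_accuracy, gap_loss, group, and lam are illustrative, not from the paper, and the paper itself learns the full Pareto front with a HyperNetwork rather than fixing one weight.

```python
# Hypothetical sketch of a differentiable accuracy-parity ("GAP"-style) loss.
# It replaces the non-differentiable 0/1 accuracy with a soft, probability-based
# accuracy so that the accuracy gap between two binary demographic groups can be
# minimized by gradient descent.
import torch

def soft_accuracy(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Differentiable surrogate for accuracy: mean predicted probability of the true class."""
    probs = torch.sigmoid(logits)  # P(toxic) for each example
    return (labels * probs + (1 - labels) * (1 - probs)).mean()

def gap_loss(logits, labels, group):
    """Absolute difference in soft accuracy between the two demographic groups (0 and 1)."""
    acc_a = soft_accuracy(logits[group == 0], labels[group == 0])
    acc_b = soft_accuracy(logits[group == 1], labels[group == 1])
    return torch.abs(acc_a - acc_b)

def total_loss(logits, labels, group, lam=0.5):
    """Predictive loss plus a weighted fairness term (lam is a hypothetical trade-off knob)."""
    bce = torch.nn.functional.binary_cross_entropy_with_logits(logits, labels.float())
    return bce + lam * gap_loss(logits, labels, group)
```

Sweeping lam over [0, 1] would trace out one accuracy-fairness trade-off curve per training run; the HyperNetwork approach described in the abstract instead amortizes this, producing Pareto-optimal trade-offs from a single training procedure.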

Original language: English
Pages (from-to): 123-141
Number of pages: 19
Journal: Information Research
Volume: 30
Issue: iConf 2025
DOIs
Publication status: Published - March 2025
Published externally
