TY - GEN
T1 - What are the biases in my word embedding?
AU - Swinger, Nathaniel
AU - De-Arteaga, Maria
AU - Heffernan, Neil Thomas
AU - Leiserson, Mark D.M.
AU - Kalai, Adam Tauman
N1 - Publisher Copyright:
© 2019 Copyright held by the owner/author(s). Publication rights licensed to ACM.
PY - 2019/1/27
Y1 - 2019/1/27
N2 - This paper presents an algorithm for enumerating biases in word embeddings. The algorithm exposes a large number of offensive associations related to sensitive features such as race and gender on publicly available embeddings. These biases are concerning in light of the widespread use of word embeddings. The associations are identified by geometric patterns in word embeddings that run parallel between people's names and common lower-case tokens. The algorithm is highly unsupervised: it does not even require the sensitive features to be pre-specified. This is desirable because: (a) many forms of discrimination, such as racial discrimination, are linked to social constructs that may vary depending on the context, rather than to categories with fixed definitions; and (b) it makes it easier to identify biases against intersectional groups, which depend on combinations of sensitive features. The inputs to our algorithm are a list of target tokens, e.g., names, and a word embedding. It outputs a number of Word Embedding Association Tests (WEATs) that capture various biases present in the data. We illustrate the utility of our approach on publicly available word embeddings and lists of names, and evaluate its output using crowdsourcing. We also show how removing names may not remove potential proxy bias.
AB - This paper presents an algorithm for enumerating biases in word embeddings. The algorithm exposes a large number of offensive associations related to sensitive features such as race and gender on publicly available embeddings. These biases are concerning in light of the widespread use of word embeddings. The associations are identified by geometric patterns in word embeddings that run parallel between people's names and common lower-case tokens. The algorithm is highly unsupervised: it does not even require the sensitive features to be pre-specified. This is desirable because: (a) many forms of discrimination, such as racial discrimination, are linked to social constructs that may vary depending on the context, rather than to categories with fixed definitions; and (b) it makes it easier to identify biases against intersectional groups, which depend on combinations of sensitive features. The inputs to our algorithm are a list of target tokens, e.g., names, and a word embedding. It outputs a number of Word Embedding Association Tests (WEATs) that capture various biases present in the data. We illustrate the utility of our approach on publicly available word embeddings and lists of names, and evaluate its output using crowdsourcing. We also show how removing names may not remove potential proxy bias.
UR - https://www.scopus.com/pages/publications/85070606273
U2 - 10.1145/3306618.3314270
DO - 10.1145/3306618.3314270
M3 - Conference contribution
AN - SCOPUS:85070606273
T3 - AIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society
SP - 305
EP - 311
BT - AIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society
PB - Association for Computing Machinery, Inc
T2 - 2nd AAAI/ACM Conference on AI, Ethics, and Society, AIES 2019
Y2 - 27 January 2019 through 28 January 2019
ER -