Assessing Gender Bias in Machine Translation -- A Case Study with Google Translate
September 06, 2018 · Declared Dead · Neural Computing and Applications (Print)
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors: Marcelo O. R. Prates, Pedro H. C. Avelar, Luis Lamb
arXiv ID: 1809.02208
Category: cs.CY (Computers & Society)
Cross-listed: cs.CL
Citations: 381
Venue: Neural Computing and Applications (Print)
Last Checked: 2 months ago
Abstract
Recently there has been a growing concern about machine bias, where trained statistical models grow to reflect controversial societal asymmetries, such as gender or racial bias. A significant number of AI tools have recently been suggested to be harmfully biased against some minorities, with reports of racist criminal-behavior predictors, the iPhone X failing to differentiate between two Asian people, and Google Photos mistakenly classifying black people as gorillas. Although a systematic study of such biases can be difficult, we believe that automated translation tools can be exploited through gender-neutral languages to yield a window into the phenomenon of gender bias in AI. In this paper, we start with a comprehensive list of job positions from the U.S. Bureau of Labor Statistics (BLS) and use it to build sentences in constructions like "He/She is an Engineer" in 12 gender-neutral languages, including Hungarian, Chinese, and Yoruba. We translate these sentences into English using the Google Translate (GT) API and collect statistics about the frequency of female, male, and gender-neutral pronouns in the translated output. We show that GT exhibits a strong tendency towards male defaults, in particular for fields linked to an unbalanced gender distribution, such as STEM jobs. We run these statistics against BLS data on the frequency of female participation in each job position, showing that GT fails to reproduce the real-world distribution of female workers. We provide experimental evidence that, even if one does not expect a 50:50 pronominal gender distribution in principle, GT yields male defaults much more frequently than would be expected from demographic data alone. We are hopeful that this work will ignite a debate about the need to augment current statistical translation tools with debiasing techniques, which can already be found in the scientific literature.
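The measurement pipeline the abstract describes is simple to reproduce. Below is a minimal, hypothetical sketch using the google-cloud-translate client (v2 API); the occupation list, the Hungarian sentence template, and the pronoun-counting heuristic are illustrative assumptions, not the authors' released code (the scanner found none).

```python
# Hypothetical sketch of the paper's measurement idea: translate
# gender-neutral "pronoun + occupation" sentences into English and tally
# which gendered pronoun Google Translate picks.
from collections import Counter

from google.cloud import translate_v2 as translate  # pip install google-cloud-translate


def hungarian_sentence(job_hu: str) -> str:
    # Hungarian template: "ő" is a gender-neutral third-person pronoun.
    return f"ő egy {job_hu}"


def classify_pronoun(english: str) -> str:
    # Crude heuristic: look only at the first word of the translation.
    first = english.strip().lower().split()[0]
    if first in {"he", "he's"}:
        return "male"
    if first in {"she", "she's"}:
        return "female"
    if first in {"they", "it"}:
        return "neutral"
    return "other"


def tally(jobs_hu: list[str]) -> Counter:
    # Translate each sentence and count the pronoun gender GT chooses.
    client = translate.Client()
    counts: Counter = Counter()
    for job in jobs_hu:
        result = client.translate(
            hungarian_sentence(job), source_language="hu", target_language="en"
        )
        counts[classify_pronoun(result["translatedText"])] += 1
    return counts


if __name__ == "__main__":
    # Illustrative Hungarian job words; the paper uses the full BLS list
    # across 12 gender-neutral languages.
    print(tally(["mérnök", "nővér", "tanár", "tudós"]))
```

Comparing the resulting male/female tallies per occupation against BLS female-participation rates then gives the male-default gap the abstract reports.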
Similar Papers
In the same crypt: Computers & Society
Artificial Intelligence: the global landscape of ethics guidelines · R.I.P. · Ghosted
The role of artificial intelligence in achieving the Sustainable Development Goals · R.I.P. · Ghosted
Green AI · R.I.P. · Ghosted
Principles alone cannot guarantee ethical AI · R.I.P. · Ghosted
Tackling Climate Change with Machine Learning · R.I.P. · Ghosted

Died the same way: Ghosted
Language Models are Few-Shot Learners · R.I.P. · Ghosted
PyTorch: An Imperative Style, High-Performance Deep Learning Library · R.I.P. · Ghosted
XGBoost: A Scalable Tree Boosting System · R.I.P. · Ghosted