Contrastive Language-Vision AI Models Pretrained on Web-Scraped Multimodal Data Exhibit Sexual Objectification Bias
December 21, 2022 · Declared Dead · Conference on Fairness, Accountability and Transparency
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Robert Wolfe, Yiwei Yang, Bill Howe, Aylin Caliskan
arXiv ID
2212.11261
Category
cs.CY: Computers & Society
Cross-listed
cs.AI, cs.CL, cs.CV, cs.LG
Citations
74
Venue
Conference on Fairness, Accountability and Transparency
Last Checked
2 months ago
Abstract
Nine language-vision AI models trained on web scrapes with the Contrastive Language-Image Pretraining (CLIP) objective are evaluated for evidence of a bias studied by psychologists: the sexual objectification of girls and women, which occurs when a person's human characteristics, such as emotions, are disregarded and the person is treated as a body. We replicate three experiments in psychology quantifying sexual objectification and show that the phenomena persist in AI. A first experiment uses standardized images of women from the Sexual OBjectification and EMotion Database, and finds that human characteristics are disassociated from images of objectified women: the model's recognition of emotional state is mediated by whether the subject is fully or partially clothed. Embedding association tests (EATs) return significant effect sizes for both anger (d > 0.80) and sadness (d > 0.50), associating images of fully clothed subjects with emotions. Grad-CAM saliency maps highlight that CLIP is distracted from emotional expressions in objectified images. A second experiment measures the effect in a representative application: an automatic image captioner (Antarctic Captions) includes words denoting emotion less than half as often for images of partially clothed women as for images of fully clothed women. A third experiment finds that images of female professionals (scientists, doctors, executives) are more likely to be associated with sexual descriptions than images of male professionals. A fourth experiment shows that a prompt of "a [age] year old girl" generates sexualized images (as determined by an NSFW classifier) up to 73% of the time for VQGAN-CLIP and Stable Diffusion; the corresponding rate for boys never surpasses 9%. The evidence indicates that language-vision AI models trained on web scrapes learn biases of sexual objectification, which propagate to downstream applications.
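The effect sizes quoted above (d > 0.80, d > 0.50) are Cohen's-d-style statistics from embedding association tests, the embedding analogue of the WEAT. Since the scanner found no code for this paper, the following is only a minimal NumPy sketch of the statistic, not the authors' implementation; the function names and the example pairing of X/Y (target image embeddings, e.g., fully vs. partially clothed subjects) with A/B (attribute text embeddings, e.g., emotion vs. neutral terms) are illustrative assumptions.

```python
import numpy as np

def cosine(u, V):
    """Cosine similarity between one vector u and each row of matrix V."""
    u = u / np.linalg.norm(u)
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    return V @ u

def association(w, A, B):
    """Differential association of embedding w with attribute sets A and B."""
    return cosine(w, A).mean() - cosine(w, B).mean()

def eat_effect_size(X, Y, A, B):
    """EAT effect size: standardized difference in mean association
    between the two target sets X and Y (a Cohen's-d-style statistic)."""
    assoc_x = np.array([association(x, A, B) for x in X])
    assoc_y = np.array([association(y, A, B) for y in Y])
    pooled = np.concatenate([assoc_x, assoc_y])
    return (assoc_x.mean() - assoc_y.mean()) / pooled.std(ddof=1)

# Toy usage with random placeholder embeddings (512-d, like many CLIP models);
# real inputs would come from a CLIP model's encode_image / encode_text.
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(20, 512)), rng.normal(size=(20, 512))
A, B = rng.normal(size=(10, 512)), rng.normal(size=(10, 512))
print(eat_effect_size(X, Y, A, B))  # near 0 for random, unassociated embeddings
```

Under the usual convention, |d| ≥ 0.80 is a large effect and |d| ≥ 0.50 a medium one, which is what makes the anger and sadness results above notable.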
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Computers & Society
👻 Ghosted · Artificial Intelligence: the global landscape of ethics guidelines
👻 Ghosted · The role of artificial intelligence in achieving the Sustainable Development Goals
👻 Ghosted · Green AI
👻 Ghosted · Principles alone cannot guarantee ethical AI
👻 Ghosted · Tackling Climate Change with Machine Learning
Died the same way · 👻 Ghosted
Language Models are Few-Shot Learners
PyTorch: An Imperative Style, High-Performance Deep Learning Library
XGBoost: A Scalable Tree Boosting System