HERB: Measuring Hierarchical Regional Bias in Pre-trained Language Models
November 05, 2022 · Entered Twilight · AACL/IJCNLP
Repo contents: NonHierarchicalBias.py, README.md, ablationDesTopics.py, calculateBias.py, calculateBiasMeasure.py, calculateBiasVariant.py, measureBias.py, measureBias.sh, measureBiasAbla.sh, prepareCity.py, prepareCityMeasure.py, prepareContinent.py, prepareContinentMeasure.py
Authors
Yizhi Li, Ge Zhang, Bohao Yang, Chenghua Lin, Shi Wang, Anton Ragni, Jie Fu
arXiv ID
2211.02882
Category
cs.CL: Computation & Language
Citations
10
Venue
AACL/IJCNLP
Repository
https://github.com/Bernard-Yang/HERB
⭐ 14
Last Checked
2 months ago
Abstract
Fairness has become a trending topic in natural language processing (NLP), addressing biases that target certain social groups such as genders and religions. However, regional bias in language models (LMs), a long-standing global discrimination problem, remains unexplored. This paper bridges the gap by analysing the regional bias learned by pre-trained language models that are broadly used in NLP tasks. In addition to verifying the existence of regional bias in LMs, we find that the biases on regional groups can be strongly influenced by the geographical clustering of the groups. We accordingly propose a HiErarchical Regional Bias evaluation method (HERB) utilising the information from the sub-region clusters to quantify the bias in pre-trained LMs. Experiments show that our hierarchical metric can effectively evaluate the regional bias with respect to comprehensive topics and measure the potential regional bias that can be propagated to downstream tasks. Our code is available at https://github.com/Bernard-Yang/HERB.
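The hierarchical idea in the abstract can be pictured with a short sketch: per-region bias scores are aggregated bottom-up through nested geographic clusters (country → sub-region → continent), so that uneven treatment of sibling regions contributes to the score at every level. This is a minimal, illustrative sketch only, assuming per-country bias scores have already been computed from LM outputs; the `cluster_bias` helper, the tree layout, and the dispersion-plus-mean aggregation rule are assumptions for illustration, not the paper's exact metric (see calculateBias.py in the repo for the authors' actual implementation).

```python
# Minimal sketch of hierarchical bias aggregation in the spirit of HERB.
# Assumptions (not the paper's exact formulation): per-country bias scores
# are precomputed, regions are nested continent -> sub-region -> country,
# and the score at each internal node mixes the dispersion among child
# clusters with the mean of the children's own scores.

from statistics import mean, pstdev


def cluster_bias(node, weight=0.5):
    """Recursively score a region tree.

    `node` is either a float (a leaf country's bias score) or a dict
    mapping sub-region names to child nodes. For an internal node, the
    score combines the dispersion among children (how unevenly the LM
    treats sibling regions) with the average of the children's scores.
    """
    if isinstance(node, (int, float)):  # leaf: a single region's score
        return float(node)
    child_scores = [cluster_bias(child, weight) for child in node.values()]
    dispersion = pstdev(child_scores) if len(child_scores) > 1 else 0.0
    return weight * dispersion + (1 - weight) * mean(child_scores)


if __name__ == "__main__":
    # Toy region tree with made-up per-country bias scores.
    world = {
        "Europe": {"Western": {"France": 0.12, "Germany": 0.10},
                   "Eastern": {"Poland": 0.25, "Romania": 0.30}},
        "Asia": {"Eastern": {"China": 0.20, "Japan": 0.15},
                 "Southern": {"India": 0.35}},
    }
    print(f"HERB-style score: {cluster_bias(world):.3f}")
```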
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Computation & Language
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
R.I.P.
👻
Ghosted
Language Models are Few-Shot Learners
R.I.P.
👻
Ghosted
RoBERTa: A Robustly Optimized BERT Pretraining Approach
R.I.P.
👻
Ghosted
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
R.I.P.
👻
Ghosted