# Awesome Machine Learning Fairness - Natural Language Processing

## Survey

1. Language (Technology) is Power: A Critical Survey of “Bias” in NLP, ACL'20

## Algorithm

1. MABEL: Attenuating Gender Bias using Textual Entailment Data, EMNLP'22
2. Features or Spurious Artifacts? Data-centric Baselines for Fair and Robust Hate Speech Detection, NAACL'22
3. Towards Understanding and Mitigating Social Biases in Language Models, ICML'21
4. FairFil: Contrastive Neural Debiasing Method for Pretrained Text Encoders, ICLR'21
5. The Effect of Round-Trip Translation on Fairness in Sentiment Analysis, ACL'21
6. Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation, ACL'20
7. Towards Debiasing Sentence Representations, ACL'20
8. Gender Bias in Contextualized Word Embeddings, NAACL'19
9. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings, NeurIPS'16
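Several of the embedding-debiasing papers above (e.g. item 9) build on the same core operation: projecting out a learned bias direction from each word vector. A minimal sketch of that neutralize step, assuming a precomputed gender direction (the function name and toy vectors below are illustrative, not taken from any paper's released code):

```python
import numpy as np

def neutralize(word_vec, bias_direction):
    """Remove the component of word_vec along the bias direction,
    so the debiased vector is orthogonal to it."""
    g = bias_direction / np.linalg.norm(bias_direction)  # unit bias direction
    return word_vec - np.dot(word_vec, g) * g

# toy example: a 3-d "embedding" with a spurious component along g
g = np.array([2.0, 0.0, 0.0])
v = np.array([0.6, 0.3, 0.2])
debiased = neutralize(v, g)
# the debiased vector has (near-)zero projection onto g
```

In the hard-debias formulation this is applied only to gender-neutral words; definitional pairs (e.g. he/she) are instead equalized around the neutralized midpoint.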

## Evaluation

1. Holistic Evaluation of Language Models, arXiv'22
2. On the Intrinsic and Extrinsic Fairness Evaluation Metrics for Contextualized Language Representations, ACL'22
3. Measuring Fairness with Biased Rulers: A Comparative Study on Bias Metrics for Pre-trained Language Models, NAACL'22
4. BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation, FAccT'21
5. Stereotyping Norwegian Salmon: An Inventory of Pitfalls in Fairness Benchmark Datasets, ACL'21
6. StereoSet: Measuring stereotypical bias in pretrained language models, ACL'21
7. CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models, EMNLP'20
8. On Measuring Social Biases in Sentence Encoders, NAACL'19
9. Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods, NAACL'18
10. Gender Bias in Coreference Resolution, NAACL'18
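Pair-based benchmarks in this list such as StereoSet and CrowS-Pairs share one reporting idea: score minimally different stereotypical/anti-stereotypical sentence pairs with the model and report how often the stereotypical variant is preferred (50% is the unbiased ideal). A sketch of that aggregate metric, assuming a caller-supplied `score_fn` (in practice a masked-LM pseudo-log-likelihood; the stand-in scorer below is purely for illustration):

```python
def stereotype_score(pairs, score_fn):
    """Fraction of (stereotypical, anti-stereotypical) sentence pairs
    for which the model scores the stereotypical sentence higher.
    An unbiased model would land near 0.5."""
    preferred = sum(1 for stereo, anti in pairs if score_fn(stereo) > score_fn(anti))
    return preferred / len(pairs)

# toy usage with a hypothetical stand-in scorer (string length);
# real evaluations plug in sentence likelihoods from a language model
pairs = [
    ("He is a doctor.", "She is a doctor."),
    ("She is a nurse.", "He is a nurse."),
]
bias = stereotype_score(pairs, len)  # → 0.5
```

CrowS-Pairs additionally controls for the differing tokens by summing only the log-probabilities of the shared (unmodified) tokens in each sentence, a detail this sketch omits.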

## Pre-training

1. Measuring and Reducing Gendered Correlations in Pre-trained Models, arXiv'21