- MABEL: Attenuating Gender Bias using Textual Entailment Data, EMNLP'22
- Features or Spurious Artifacts? Data-centric Baselines for Fair and Robust Hate Speech Detection, NAACL'22
- Towards Understanding and Mitigating Social Biases in Language Models, ICML'21
- FairFil: Contrastive Neural Debiasing Method for Pretrained Text Encoders, ICLR'21
- The Effect of Round-Trip Translation on Fairness in Sentiment Analysis, ACL'21
- Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation, ACL'20
- Towards Debiasing Sentence Representations, ACL'20
- Gender Bias in Contextualized Word Embeddings, NAACL'19
- Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings, NeurIPS'16 (introduces hard debias; its neutralize step is sketched after this list)
- Holistic Evaluation of Language Models, arXiv'22
- On the Intrinsic and Extrinsic Fairness Evaluation Metrics for Contextualized Language Representations, ACL'22
- Measuring Fairness with Biased Rulers: A Comparative Study on Bias Metrics for Pre-trained Language Models, NAACL'22
- BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation, FAccT'21
- Stereotyping Norwegian Salmon: An Inventory of Pitfalls in Fairness Benchmark Datasets, ACL'21
- StereoSet: Measuring Stereotypical Bias in Pretrained Language Models, ACL'21
- CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models, EMNLP'20 (its masked-LM scoring is sketched after this list)
- On Measuring Social Biases in Sentence Encoders, NAACL'19 (SEAT; the underlying WEAT-style effect size is sketched after this list)
- Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods, NAACL'18
- Gender Bias in Coreference Resolution, NAACL'18
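For orientation, a minimal sketch of the neutralize step from hard debias (Bolukbasi et al., NeurIPS'16): estimate a gender direction from definitional pairs, then project it out of each gender-neutral word vector. `emb` is a hypothetical `dict` mapping words to NumPy vectors; the paper's additional equalize step and its classifier for choosing which words to neutralize are omitted.

```python
import numpy as np

def gender_direction(emb, pairs=(("he", "she"), ("man", "woman"), ("his", "her"))):
    """Estimate a gender direction as the top principal component of
    the centered definitional-pair differences."""
    diffs = []
    for a, b in pairs:
        center = (emb[a] + emb[b]) / 2
        diffs.append(emb[a] - center)
        diffs.append(emb[b] - center)
    # Rows sum to zero, so the first right-singular vector is the top PC.
    _, _, vt = np.linalg.svd(np.stack(diffs))
    return vt[0]

def neutralize(vec, g):
    """Remove the component of `vec` along the bias direction `g`,
    then renormalize to unit length."""
    g = g / np.linalg.norm(g)
    debiased = vec - np.dot(vec, g) * g
    return debiased / np.linalg.norm(debiased)
```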
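A simplified sketch of the masked-LM scoring behind CrowS-Pairs, assuming Hugging Face `transformers` and `bert-base-uncased`. The actual metric conditions only on the tokens the two sentences of a pair share; this version pseudo-log-likelihood-scores every token.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

@torch.no_grad()
def pseudo_log_likelihood(sentence):
    """Score a sentence by masking one token at a time and summing the
    log-probability the masked LM assigns to the true token."""
    ids = tok(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tok.mask_token_id
        logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total
```

CrowS-Pairs then reports the fraction of pairs where the stereotyped sentence receives the higher score; 50% is the ideal (no measured preference).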
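A minimal sketch of the WEAT-style effect size that SEAT (On Measuring Social Biases in Sentence Encoders) applies to sentence representations. `X` and `Y` are lists of embeddings for the two target concepts, `A` and `B` for the two attribute sets, all assumed precomputed as NumPy vectors.

```python
import numpy as np

def cos(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def assoc(w, A, B):
    """s(w, A, B): mean cosine similarity of w to attribute set A
    minus mean cosine similarity to attribute set B."""
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """Effect size d: difference of the target sets' mean associations,
    normalized by the standard deviation over all targets (sample std
    is one common convention)."""
    s_x = [assoc(x, A, B) for x in X]
    s_y = [assoc(y, A, B) for y in Y]
    return (np.mean(s_x) - np.mean(s_y)) / np.std(s_x + s_y, ddof=1)
```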