In this paper, the authors survey 304 papers on gender bias in natural language processing (NLP). They begin by explaining their methodology and tracing the development of the field across popular NLP venues. They then examine how gender is defined in society, and define gender bias and sexism specifically in the context of NLP, with attention to the ethical implications. The authors compile the lexica and datasets commonly used in gender bias research, and discuss formal definitions of gender bias along with the methods developed for detecting and mitigating it.

The paper highlights how gender bias and sexism manifest in natural language and influence various downstream tasks. Language serves as a powerful tool for expressing gender bias, and biases from the source data can be transferred to algorithms, which may further amplify existing cultural prejudices and inequalities.
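This transfer from data to model can be made concrete with word embeddings. The following is an illustrative sketch, not taken from the paper: the toy vectors and the `gender_lean` score are hypothetical, but the mechanism they demonstrate, occupation words sitting closer to one gender pole of the embedding space, is the kind of learned association the surveyed detection methods measure.

```python
# Illustrative sketch: how bias in training data can surface in embeddings.
# The 3-d vectors below are hypothetical toy values, not real embeddings.
import numpy as np

emb = {
    "he":       np.array([ 1.0, 0.0, 0.1]),
    "she":      np.array([-1.0, 0.0, 0.1]),
    "engineer": np.array([ 0.8, 0.5, 0.2]),  # leans toward "he" by construction
    "nurse":    np.array([-0.7, 0.6, 0.2]),  # leans toward "she" by construction
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def gender_lean(word):
    """A simple bias score: similarity to 'he' minus similarity to 'she'."""
    return cosine(emb[word], emb["he"]) - cosine(emb[word], emb["she"])

print(f"engineer lean: {gender_lean('engineer'):+.2f}")  # positive: male-leaning
print(f"nurse lean:    {gender_lean('nurse'):+.2f}")     # negative: female-leaning
```

A downstream system that ranks candidates or resolves pronouns using such similarities would inherit these associations directly, which is the transfer the paper describes.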

The authors emphasize that gender bias in NLP models can have harmful consequences for end users, resulting in representational and allocational harms as well as gender gaps. Structural bias occurs when sentence construction exhibits patterns closely tied to gender bias, including gender generalization and the explicit marking of sex. Contextual bias, by contrast, cannot be identified from grammatical structure alone; it emerges from the tone, word choice, or context of a sentence and requires contextual background knowledge to detect.

The paper also discusses bias amplification: NLP models can not only perpetuate existing biases in language but also strengthen them, so that the gender correlations in a model's predictions exceed those present in its training data. Previous research has demonstrated this phenomenon empirically.
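A minimal sketch of the amplification effect, under hypothetical toy data (the activity and correlation figures are assumptions, not from the paper): a predictor that simply maximizes accuracy on a skewed dataset pushes a 70% gender-activity correlation in the data to 100% in its output.

```python
# Illustrative sketch of bias amplification with hypothetical toy data.
import random

random.seed(0)

# Toy training set: the activity "cooking" co-occurs with the gender label
# "woman" in roughly 70% of examples.
train = [("cooking", "woman") if random.random() < 0.7 else ("cooking", "man")
         for _ in range(1000)]

genders = [g for _, g in train]
# An accuracy-maximizing predictor with no other signal always outputs the
# majority gender for the activity -- driving the correlation to 100%.
majority = max(set(genders), key=genders.count)
preds = [majority for _ in train]

train_rate = genders.count("woman") / len(genders)
pred_rate = preds.count("woman") / len(preds)
print(f"training correlation:  {train_rate:.2f}")   # roughly 0.70
print(f"predicted correlation: {pred_rate:.2f}")    # 1.00 -- amplified
```

The model's output distribution is thus more skewed than the data it learned from, which is precisely the amplification the paper warns about.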

Overall, the paper provides a comprehensive overview of gender bias in NLP, its impact on downstream tasks, and the methods developed for detection and mitigation.