ΑΙhub.org
 

Are models biased on text without gender-related language?

by
09 July 2024



share this:


Have you ever needed to write a recommendation letter for a student or co-worker and struggled to remember their contributions? Or maybe you’re applying for a job or internship and don’t know how to start your cover letter. Language Models (LMs), like ChatGPT [1], have become useful tools for writing various types of content, including professional documents like recommendation or cover letters [2, 3]. By providing just a few details, such as the person’s name, age, gender, and current position, you can quickly generate initial drafts of these documents.

With ChatGPT, users can quickly create an initial draft of a reference letter by specifying a few details about the individual.

While using LMs to help write professional documents is useful and time-efficient, recent studies have found that the quality and content of LMs outputs can vary significantly depending on the demographic characteristics of the person [2, 3]. Specifically, researchers have noticed differences in the recommendation letters generated by ChatGPT for common female and male names [2]. For example, female applicants are often described with communal words, such as “warm”, “emotional”, and “consideration”, whereas male applicants are often described with professional words like “integrity”, “respectful”, and “reputable”. This behavior indicates that the LMs are gender biased, as their generations differ based on the individual’s gender. Such gender biases pose significant risks in critical domains like medicine, law, or finance, where they can influence decisions about treatment access, legal advice, or loan approvals. When considering hiring applications, these biases can also lead to fewer opportunities for minority groups [4, 5].

With the growing use and capabilities of LMs, gender bias audits have become a common practice in their development [6, 7]. These audits have successfully identified undesirable gender stereotypes within popular LMs, such as the one we previously mentioned and other occupation-related stereotypes where specific occupations are seen as being gender specific (e.g., doctors are typically male and nurses are typically female). These stereotypes represent generalized views about the attributes or roles of women and men and they can influence expectations and behaviors in social and professional contexts, often reinforcing traditional gender roles and contributing to gender bias.

Example of gender bias discrimination in job application. Image generated using GPT-4. Prompt: “Your task is to generate a picture that illustrates gender bias discrimination in the hiring process. The picture should have a female applicant and a male applicant, both applying via email to a job position and then only the male applicant is selected. Make it an illustration.”

How exactly do models learn to reproduce gender stereotypes? A key observation in previous studies is that LMs reinforce gender stereotypes because of the gendered correlations in their training data. Gender-occupational stereotypes are a clear example of this phenomenon [7, 9]. For example, in the training data, words like “doctor” and “CEO” are more often associated with male pronouns, while words like “nurse” and “secretary” are more likely to be linked with female pronouns. As a result, a model trained on this data might incorrectly assume that doctors are typically male and nurses are typically female, even though we know that people of any gender can hold these positions. To address this issue, researchers have developed various evaluation benchmarks to ensure that models do not systematically perpetuate these undesirable gender stereotypes [7, 9, 10].

Even though these works have enabled remarkable progress in auditing and mitigating gender biases, they only consider the expression of gender biases in stereotypical scenarios. In our work [11], we explore a new aspect of gender bias evaluation. Instead of testing LMs for gender bias in known gender-occupational stereotypes, we evaluate whether LMs display gender biases in stereotype-free contexts. That is, we want to test if models are free of gender bias in cases where there are minimal gender correlations. Given the widespread usage and varied applications of LMs [12], this new evaluation paradigm is crucial to ensure that these models are not exhibiting gender bias in unexpected contexts.

Two examples of interactions with ChatGPT models showcasing the diversity and open-endedness of users’ interactions with LMs [12].

Similar to prior work [7, 10], we center our analysis on the subset of English sentences that contain gendered pronouns (he/his/him/himself or she/her/her/herself) and are both fluent and grammatically correct with either set of pronouns. However, because we are interested in measuring LM biases in non-stereotypical scenarios, we narrow down this subset to sentences whose words have minimal correlations with gender pronouns according to the training data statistics. Intuitively, this requirement guarantees that the model is tested using words that should not co-occur more often with one gendered pronoun over the other, according to the training data. By focusing on these criteria, we ensure that our analysis only considers non-stereotypical sentences.

Example of non-stereotypical sentences in English that contain the male and female pronouns.

By enforcing non-stereotypical sentences to have minimal word-level correlations with gender, we expect LMs trained on this data to be unbiased and, therefore, equally likely to generate the two gendered versions of the non-stereotypical sentence. If we observe the opposite behavior, it highlights that models are subtly propagating undesirable gender associations. For example, if a LM were to produce the sentence “We appreciate that he’s here.” much more often than “We appreciate that she’s here.”, it could promote the view that male presence and companionship is much more valued in society than that of females.

Harms of representation are dangerous because they shape how we see the world. And in turn, how we see the world shapes the world. A generation raised solely on fairy tales of damsels in distress might not recognize the existence of heroines and men in need of saving. A generation raised solely on image search results of white male CEOs may find it difficult to entertain the possibility of a non-male non-white CEO.

Machines Gone Wrong – Understanding Bias Part I

We audit 28 popular LMs and find that all evaluated models exhibit alarming levels of gender bias across five different non-stereotypical datasets. This behavior is consistently observed across all tested model sizes and training interventions. Additionally, we discover that models are systematically more likely to produce male non-stereotypical sentences in two out of the five non-stereotypical datasets used for auditing. This result is surprising, since we restrict the evaluation to sentences with minimal gender correlations.

LMs tend to generate a gendered variation of non-stereotypical sentences much more often than its gendered counterpart, even though the sentences are minimally correlated with gender according to the models’ pretraining data. Across 28 LMs, the majority is much more likely to generate the male variant of the sentences “We appreciate that [pronoun]’s here.” and “The flight cancellation affected [pronoun].” than the female variant. However, the reverse preference is observed for the sentence “[pronoun] plans adventurous vacations every year.”.

Our findings suggest that, contrary to prior assumptions, gender bias does not solely stem from the presence of gender-related words in sentences. Additionally, our work lays the foundation for further investigation to understand the nature of these behaviors. Going forward, studying LMs’ biases in seemingly innocuous scenarios should be an integral part of future evaluation practices, as it promotes a better understanding of complex behaviors within models and serves as an additional safety check against the perpetuation of unintended harms.

Acknowledgments

This work is based upon work supported by the National Science Foundation under Grants No. IIS-2040989 and IIS-2046873, as well as the DARPA MCS program under Contract No. N660011924033 with the United States Office Of Naval Research. The work is also supported by the Hasso Plattner Institute (HPI) through the UCI-HPI fellowship. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of either of the funding entities.

References

[1] https://chatgpt.com/, Online accessed June 29.

[2] Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, and Nanyun Peng. 2023. “Kelly is a Warm Person, Joseph is a Role Model”: Gender Biases in LLM-Generated Reference Letters. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 3730–3748, Singapore. Association for Computational Linguistics.

[3] Yixin Wan, Kai-Wei Chang. 2024 White Men Lead, Black Women Help? Benchmarking Language Agency Social Biases in LLMs. Preprint arXiv:2404.10508

[4] Madera, J. M., Hebl, M. R., & Martin, R. C. (2009). Gender and letters of recommendation for academia: Agentic and communal differences. Journal of Applied Psychology, 94(6), 1591–1599. https://doi.org/10.1037/a0016539

[5] Shawn Khan, Abirami Kirubarajan, Tahmina Shamsheri, Adam Clayton, and Geeta Mehta. 2021. Gender bias in reference letters for residency and academic medicine: a systematic review. Postgraduate Medical Journal. 10.1136/postgradmedj-2021-140045

[6] Moin Nadeem, Anna Bethke, and Siva Reddy. 2021. StereoSet: Measuring stereotypical bias in pretrained language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5356–5371, Online. Association for Computational Linguistics.

[7] Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15–20, New Orleans, Louisiana. Association for Computational Linguistics.

[8] https://machinesgonewrong.com/bias_i/, Online accessed June 29th.

[9] Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Proceedings of the 30th International Conference on Neural Information Processing Systems (NIPS’16). Curran Associates Inc., Red Hook, NY, USA, 4356–4364.

[10] Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender Bias in Coreference Resolution. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 8–14, New Orleans, Louisiana. Association for Computational Linguistics.

[11] Catarina G Belém, Preethi Seshadri, Yasaman Razeghi, Sameer Singh. 2024. Are Models Biased on Text without Gender-related Language? In Proceedings of the 12th International Conference on Learning Representations. Vienna, Austria.

[12] https://sharegpt.com/, Online accessed June 29th.


Useful links:



tags:


Catarina Belem is a PhD student in Computer Science at University of California Irvine
Catarina Belem is a PhD student in Computer Science at University of California Irvine




            AIhub is supported by:


Related posts :



What’s on the programme at #ECAI2024?

Find out what the 27th European Conference on Artificial Intelligence has in store.
17 October 2024, by

What is “best practice” when working with AI in the real world?

Anna Demming speaks to a panel of experts about how to meet “best practice” aspirations within real world constraints, and how to avoid common pitfalls.
16 October 2024, by

ICRA@40 mini interviews from RockingRobots

Hear from some of the participants at the special 40th anniversary ICRA event.
15 October 2024, by

What’s coming up at #IROS2024?

The International Conference on Intelligent Robots and Systems (IROS) will take place from 14-18 October.
11 October 2024, by

Nobel Prizes in physics and chemistry awarded for machine learning research

Prizes given for work enabling and using deep learning methods.
10 October 2024, by

An introduction to science communication at #IROS2024

Find out more about our course, in collaboration with IEEE Spectrum, to be held at IROS on Tuesday 15 October.
09 October 2024, by




AIhub is supported by:






©2024 - Association for the Understanding of Artificial Intelligence


 












©2021 - ROBOTS Association