Should we expect a reinforcement of gender stereotypes with artificial intelligence?
Illustrator Nato Tardieu
By Deborah Rouach
Artificial intelligence could have offered humanity the possibility of creating an intelligence free of any belief, prejudice or stereotype. However, this is to forget that it is not an autonomous intelligence but a human creation, shaped by a narrow and biased perception of reality. Moreover, artificial intelligence is unable to perceive or explain the gender discrimination it harbors. It is to be feared that it will inadvertently exacerbate existing gender biases and reinforce gender inequalities.
Why does artificial intelligence adopt gender stereotypes?
"Like all technologies before it, artificial intelligence will reflect the values of its creators." In 2016, researcher Kate Crawford highlighted the risks of an AI steeped in the thinking of its designers. Although the latter, mainly men, boast about the neutrality of their programs, they consciously or unconsciously project their worldview onto their creation. Computer scientist Joanna Bryson reminds us that AI is an extension of our culture. An AI Now report published in 2018 reveals a growing consensus that artificial intelligence systems are neither neutral nor objective.
It is more difficult, however, to recognise the connection between the discriminatory products of the AI industry and their predominantly male creators. With this in mind, researcher Rachel Adams argues that there is "an essential link between the development of AI systems that present gender bias and the lack of women in the teams that design them". The World Economic Forum's 2018 Global Gender Gap Report had already exposed the overrepresentation of men in the design and development of AI technologies, with women making up only 22% of AI professionals.
Based on machine learning, AI internalizes an outdated vision of society, transcribing humans' inability to overcome the prejudices rooted in their perception of the world. During its conception, an AI integrates preconceived ideas at three levels: the training data impregnated with human prejudices, the algorithms that learn from real-world data, and the people who participate in its development and shape its perception of the world.
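The first two of these levels can be illustrated with a deliberately simplified sketch: a toy "model" that merely learns the most frequent historical outcome per gender from a biased dataset will faithfully reproduce that bias in its predictions. The dataset and function names below are invented for illustration only.

```python
from collections import Counter

# Hypothetical historical hiring records: (gender, hired) pairs.
# The data itself encodes a past bias: men were hired far more often.
history = [("male", True)] * 80 + [("male", False)] * 20 \
        + [("female", True)] * 20 + [("female", False)] * 80

def train_majority_model(records):
    """Learn, per gender, the most frequent historical outcome."""
    outcomes = {}
    for gender in {g for g, _ in records}:
        counts = Counter(hired for g, hired in records if g == gender)
        outcomes[gender] = counts.most_common(1)[0][0]
    return outcomes

model = train_majority_model(history)
print(model)  # the model simply replays the bias in its training data
```

Nothing in the algorithm is "sexist" in itself; the discrimination enters entirely through the data it is given, which is precisely the point the paragraph above makes.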
Virtual personal assistants, the majority of which are gendered female, are proof of this. Through their names (Alexa, Cortana, Siri), their voices and their programmed flirtatiousness, these assistants reproduce discriminatory stereotypes about women and help anchor in our minds the idea that women are subordinate, helpful and obedient. Moreover, the digitisation of the profession of secretary, predominantly female in reality, can shape expectations of how real women should behave.
Physical robots embody gender stereotypes even more visibly: those with curves and feminine voices animate exhibitions or accompany patients in healthcare establishments, while technical chatbots are often male. There is thus a gendered reproduction of occupations through the use of AI, which exacerbates the rigid categorisation of our society based on a gendered division of tasks. This anthropomorphisation of non-human entities can be explained in part as an attempt to understand and control artificial intelligence through experiences drawn from the human world. Yet even granting that customers prefer feminised robots because they inspire more confidence than a male robot, which might appear threatening, one cannot overlook the fact that artificial intelligence contributes to the sexism and misogyny already present in our societies.
What are the implications?
The risk is that, over time, we come to consider AI as objective, delivering a new standard free of our human faults, when it is in fact their quintessence. Revolutionary, AI is set to be present in every sphere of our daily lives and to occupy an increasingly important place. It could therefore help condition men and women to conform to traditional gender roles through algorithms that consolidate these stereotypes.
Reproducing an outdated vision of society would roll back advances in gender equality, reinforcing, in effect, the structure of patriarchal domination. An AI unaware of notions of gender equality and diversity is likely to favor only men in job interviews, as was the case with Google Hire, or to perpetuate the pay gap by targeting advertisements for better-paid jobs at men. Facial recognition software using AI could also harm people who do not identify with binary gender standards. These findings are all the more worrying as a growing number of companies are using AI, which so far has dangerously amplified gender-based discrimination.
How to develop fair AI?
In order to rectify the potential harms caused by misogynistic and unethical AI, several factors must be acted upon. First, it must be recognised that gender ideology is rooted in language. The data and algorithms that power AI machine learning must therefore rely on text corpora, training samples and programs that promote gender diversity and equality and incorporate concepts of equity. As for the teams designing software and algorithms, they must become more diverse and be enriched by the presence of female professionals, especially since it is mainly women who are working to solve the problem of the lack of consideration for gender equality in AI.
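One concrete, widely used way to act on the training-sample factor mentioned above is to rebalance a corpus so that no gender dominates it before a model is trained. The sketch below is a minimal illustration under invented data and names, not a description of any particular company's pipeline.

```python
import random

# Hypothetical training records: (text, gender_label) pairs.
# The raw corpus over-represents one group.
corpus = [("she is a nurse", "female")] * 30 + [("he is an engineer", "male")] * 70

def rebalance(records, seed=0):
    """Downsample each group to the size of the smallest one,
    so no gender dominates the training sample."""
    random.seed(seed)
    by_group = {}
    for text, group in records:
        by_group.setdefault(group, []).append(text)
    n = min(len(texts) for texts in by_group.values())
    balanced = []
    for group, texts in by_group.items():
        balanced += [(t, group) for t in random.sample(texts, n)]
    return balanced

balanced = rebalance(corpus)
counts = {g: sum(1 for _, grp in balanced if grp == g) for g in ("female", "male")}
print(counts)  # both groups are now equally represented
```

Downsampling is only one of several possible corrections (upsampling, reweighting, or curating new data are others); the point is that balance must be engineered deliberately rather than assumed.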
Notions such as neutrality, transparency and inclusiveness are taking a growing place at the heart of reflection on the consequences of AI for our societies. While there is currently no consensus standard for determining whether an AI is sexist, consulting firms nevertheless offer audits to companies wishing to verify the fairness of their algorithms. The growing realisation that artificial intelligence requires an interdisciplinary approach is therefore encouraging for the future.
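Although no consensus standard exists, such fairness audits typically rest on simple group-level metrics. One common example is the demographic parity gap, the difference in positive-outcome rates between groups; the audit data below is invented for illustration.

```python
def demographic_parity_gap(predictions):
    """Difference in positive-outcome rates between groups.
    `predictions` is a list of (group, positive: bool) pairs."""
    rates = {}
    for group in {g for g, _ in predictions}:
        outcomes = [positive for g, positive in predictions if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit of a hiring model's decisions:
audit = [("male", True)] * 60 + [("male", False)] * 40 \
      + [("female", True)] * 30 + [("female", False)] * 70

gap = demographic_parity_gap(audit)
print(f"{gap:.2f}")  # a large gap flags the model for human review
```

A single number like this cannot prove an algorithm is fair, which is why audits combine several metrics with qualitative, interdisciplinary review.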
ADAMS Rachel, "Artificial Intelligence has a gender bias problem – just ask Siri", 22 September 2019, The Conversation, available at: https://theconversation.com/artificial-intelligence-has-a-gender-bias-problem-just-ask-siri-123937
CHARATAN Debrah, "How more women in AI could change the world", 15 April 2018, VentureBeat, available at: https://venturebeat.com/2018/04/15/how-more-women-in-ai-could-change-the-world/
CRAWFORD Kate, "Artificial Intelligence's White Guy Problem", 25 June 2016, New York Times, available at: https://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html
FEAST Josh, "4 Ways to Address Gender Bias in AI", 20 November 2019, Harvard Business Review, available at: https://hbr.org/2019/11/4-ways-to-address-gender-bias-in-ai
LEAVY Susan, "Gender bias in artificial intelligence: the need for diversity and gender theory in machine learning", University College Dublin, May 2018.
LEGROS Claire, "Les études de genre se penchent sur le sexe des robots", 25 September 2018, Le Monde, available at: https://www.lemonde.fr/festival/article/2018/09/25/les-etudes-de-genre-se-penchent-sur-le-sexe-des-robots_5359786_4415198.html
LEGROS Claire, interview with BERNHEIM Aude and VINCENT Flora, 3 March 2019, Le Monde, available at: https://www.lemonde.fr/economie/article/2019/03/03/le-manque-de-femmes-dans-l-intelligence-artificielle-accroit-le-risque-de-biais-sexistes_5430820_3234.html
AI Now Report, December 2018, available at: https://ainowinstitute.org/AI_Now_2018_Report.pdf
Global Gender Gap Report 2018, World Economic Forum.
RAUCH Isabelle, "Si l'on n'y prend garde, l'intelligence artificielle reproduira nos stéréotypes de genre", op-ed of 7 February 2020, Le Monde, available at: https://www.lemonde.fr/idees/article/2020/02/07/si-l-on-n-y-prend-garde-l-intelligence-artificielle-reproduira-nos-stereotypes-de-genre_6028811_3232.html
REDDY Deepti, "Breaking Gender Bias in Artificial Intelligence", 17 April 2017, Medium, available at: https://medium.com/my-ally/breaking-gender-bias-in-artificial-intelligence-c3c143038c20
To cite this article: Deborah Rouach, "Should we expect a reinforcement of gender stereotypes with artificial intelligence?", 12.03.2020, Gender in Geopoliti