Hardwired: Gender bias in robots and artificial intelligence

There is something in common between the AI Sophia, the chatbot Mitsuku, the Twitter bot Tay, and personal assistant robots such as Alexa, Siri, Cortana and Xiaoice. They are all gendered, and they are all female. Admittedly, Siri can be set to a male voice, but by default her voice is female, as is her name. The chatbot Mitsuku[1] (four-time winner of the Loebner Prize) is depicted as a female anime character and, when asked what she would do with money, answers that she would “buy a handbag”. Although she is a gender-neutral algorithm accessible online, that is not how she is portrayed, nor even how she is programmed to respond.

This is a growing trend. Airlines have started using female-presenting chatbots to handle customer service online: you can chat to Jenn if you’re flying with Alaska Airlines, or to Alex if you’d rather fly United. Most GPS systems have a female voice by default.[2] Academics have noticed the trend too, and research on robots’ perceived genders – and on how those genders shape human-robot interaction – has started to proliferate.

Karl MacDorman[3], a professor at Indiana University’s School of Informatics and Computing, studied people’s reactions to female and male voices, and found that both men and women tend to rate a female voice as more agreeable. If customers are more likely to buy a robot with a female voice, it isn’t surprising that the market offers female-gendered robots. But while this might explain why most robots have a female voice, it doesn’t explain why chatbots that communicate through a written chat window also need to be female.

Ex Machina (2014, dir. Alex Garland)

This cultural bias towards gendered robots, especially female ones, is also expressed in films, videogames and literature. There is a wealth of female robots today – from GLaDOS in the “Portal” games, to the Scarlett Johansson-voiced Samantha in “Her”, to Ava in “Ex Machina”, and Joi, the ultimate wifebot, in “Blade Runner 2049”.

This idea of a female android is actually as old as the word “android” itself. In 1886, Villiers de l’Isle-Adam coined the term “andréïde” in his novel “The Future Eve”[4]. In this particularly misogynistic story, the main character builds a woman-like machine – the android Hadaly, the “future Eve” of the title. In effect, the novel is about the first sexbot, a machine with the physical qualities of the woman she is modelled on but without her difficult character. The Biblical implications of the title send us back to an even older mythos: that of the man-made docile woman, as opposed to the first wild woman.

According to the Babylonian Talmud, when God created Adam, he made Lilith out of the same earth. This first woman considered herself Adam’s equal, which was cause enough to banish her from paradise. Depending on the sources, she sometimes left of her own volition:

He then created a woman for Adam, from the earth, as He had created Adam himself, and called her Lilith. Adam and Lilith immediately began to fight. […] When Lilith saw this, she pronounced the Ineffable Name and flew away into the air. Adam stood in prayer before his Creator: ‘Sovereign of the universe!’ he said, ‘the woman you gave me has run away.’ At once, the Holy One, blessed be He, sent these three angels to bring her back[5].

But Lilith refused to return. To replace her, God took one of Adam’s ribs and made Eve. Eve is shaped from Adam, as opposed to Lilith, who is made from the same soil. Of course, Eve will turn out to be problematic too, and there is no need to dwell on the unfortunate business with the apple. But it is important to note that the belief that robots will ultimately lead to humanity’s destruction has become an integral part of our mythos. So, after banishing Lilith/women for being troublesome, men are left with Eve/robots, who will lead to their demise.

This idea of a submissive man-made female robot keeps re-emerging after Villiers de l’Isle-Adam. “The Stepford Wives”, Ira Levin’s 1972 satirical novel[6], tackles exactly that. The Stepford wives are robot-wives, made to be entirely controlled by men. And that is what robots are: they are programmed to be powerless. The robots we interact with in our everyday lives fulfil the duties traditionally expected of a submissive wife; as such, they tend to be social robots. Alexa, Siri, Cortana and Xiaoice are all personal assistants. Those characteristics stem from traditionally female clichés: entertaining guests, working as a receptionist or assistant, supporting the male lead. It isn’t reassuring that customers prefer giving orders to a female robot, or get into the habit of doing so. What’s even more worrying is that an issue highlighted in the 70s – long before Alexa and Siri were in people’s homes, doing their owners’ bidding – is still relevant.

These founding myths, and the religious imagery attached to them, shape the beliefs and understanding of societies with a Judeo-Christian heritage, and they may well be shaping the way people within those societies relate to robots. Although human-computer interaction is an entirely new domain where everything is still to be built, archaic beliefs are already influencing the way we manage those relationships.

IBM Watson

That is also true of male robots/chatbots/AI, both in reality (Watson[7], Ross[8]) and in fiction (The Terminator, HAL 9000). If the word “android” brings us back to a female AI, the word “robot” itself is more gender-neutral, and can be traced back to another creature from Jewish folklore – the golem. The term “robot” comes from the Czech “robota”, which means forced labour or drudgery. It became a generalised science-fiction term thanks to Karel Čapek’s play “Rossum’s Universal Robots”, first performed in 1921. Although Čapek never said he modelled his robots on the figure of the golem, the similarities are striking.

Conceptually, golems compare easily to robots. A golem is manufactured, and a scroll or tablet bearing a sequence of letters (most often a shem) is placed inside its head. In the same way, a robot is manufactured and obeys code – a series of letters and numbers placed inside it. Golems can be turned on and off by inserting or removing the letters; they are the only monsters of folklore with an on/off switch. They are of interest here because golems are either gender-neutral or male: this is the counterpart to the docile female android – a sturdy, strong and, in most myths, ultimately destructive male robot.

These male robots fulfil traditional male roles – even when that role is the villain’s. If a robot is used for physical labour, soldiering or leadership, chances are it will be portrayed as male. Female androids are made for pleasure; male robots are made for factory work. In the same way, real-life AI tends to be male when it is meant to inspire confidence or impress its authority on the humans using it. Watson is IBM’s AI, and ‘he’ works in business. Ross is billed as the “first artificially intelligent attorney”. This isn’t surprising, as law and business are traditionally male-dominated fields.

Cortana – just one example of an unnecessarily sexualized AI

Where non-gendered AI could help fight long-standing prejudice, gendering robots only serves to reinforce existing stereotypes. Male robots will give you advice on how to manage your business, or instructions on how to construct a case. Female robots will turn on the music for you.

If this feels bad, there is worse to come. Not only are we prejudiced in the way we interact with robots; we build prejudice into our robots, which become prejudiced in turn. It would be nice to imagine that robots wouldn’t suffer from the same biases as humans, but they are, after all, imagined, coded and produced by humans – mostly, it has to be noted, by men. Tech is still a male-dominated area of study and industry.

It has already been shown that robots – from complex AI to search algorithms – express bias[9]. Carnegie Mellon University researchers showed that Google’s ad-targeting algorithms were more likely to show adverts for better-paid, upper-level jobs to male profiles than to female ones. Guillaume Chaslot showed that the YouTube algorithm is also biased in its choice of which videos to offer to play next[10].

The problem arises, in part, from the way AI learns. Machine learning consists of providing an AI with data – a set of examples of the situations it needs to learn from. Based on that data, the AI decides which behaviour to adopt each time it encounters a similar situation. This form of machine learning powers all kinds of AI, from self-driving cars to AlphaGo (developed by DeepMind), which it enabled to win its match against Fan Hui in 2015. The problem with machine learning is this: if the examples are biased, so will the AI be. According to Kate Crawford, biased data is one of the biggest problems in machine learning and cannot be left unchallenged: “Histories of discrimination can live on in digital platforms, and if they go unquestioned, they become part of the logic of everyday algorithmic systems.”[11]
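
To make the mechanism concrete, here is a minimal sketch – with entirely invented data – of how a model trained on biased examples reproduces that bias. The dataset, feature names and numbers are all hypothetical; only the principle matters.

```python
# A toy illustration of biased training data producing a biased model.
# All data here is invented for the sake of the example.
from sklearn.linear_model import LogisticRegression

# Each example: [years_of_experience, is_female]; label: 1 = hired, 0 = rejected.
# In this invented "history", women need twice the experience to be hired.
X = [[2, 0], [4, 0], [6, 0], [8, 0],   # men: hired from 4 years up
     [2, 1], [4, 1], [6, 1], [8, 1]]   # women: hired only from 8 years up
y = [0, 1, 1, 1,
     0, 0, 0, 1]

model = LogisticRegression().fit(X, y)

# Two candidates with identical experience, differing only by gender:
print(model.predict_proba([[5, 0]])[0][1])  # probability of "hired" for the man
print(model.predict_proba([[5, 1]])[0][1])  # noticeably lower for the woman
```

The model has learned nothing about merit; it has simply encoded the discrimination present in its examples.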

The consequences of such bias on our lives are real, and could be dire. Amazon scrapped a piece of recruitment software when it realised the tool showed a strong bias against women[12]. Similarly, Facebook’s algorithm was accused of influencing the 2016 US presidential election, and executives of the social network discussed the impact this influence might have had in a private meeting shortly after Donald Trump was elected[13]. As if allowing biased technology to choose our employees or our leaders weren’t bad enough, some jurisdictions also use risk assessment systems to predict who is most likely to commit a crime. These systems rely on complicated machine learning algorithms, but a 2016 ProPublica study showed that they can suffer from racial bias: “The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labelling them this way at almost twice the rate as white defendants. White defendants were mislabelled as low risk more often than black defendants.”[14]
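
The disparity ProPublica describes is a difference in false positive rates: among defendants who did not go on to reoffend, what fraction were flagged as high risk in each group? A toy calculation, on invented records, shows what “almost twice the rate” means:

```python
# False positive rate per group: flagged high-risk AND did not reoffend,
# divided by all who did not reoffend. Records are invented for illustration.
records = [
    # (group, flagged_high_risk, reoffended)
    ("black", True,  False), ("black", True,  False),
    ("black", True,  True),  ("black", False, False),
    ("white", True,  False), ("white", False, False),
    ("white", False, False), ("white", True,  True),
]

def false_positive_rate(group):
    did_not_reoffend = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in did_not_reoffend if r[1]]
    return len(flagged) / len(did_not_reoffend)

# Black: 2 of 3 non-reoffenders flagged; white: 1 of 3 - twice the rate.
print(false_positive_rate("black"), false_positive_rate("white"))
```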

Robots are gender-neutral, but their creators, designers and users aren’t. What we make, be it a cultural or technological product, inherits our biases. By creating prejudiced robots and treating robots with prejudice, we are building a world in which discrimination becomes ingrained and harder to root out. We can’t rely on AI to make unbiased choices for us. Above all, we must fight the assumption that robots are neutral. Discriminatory robots mustn’t serve as “proof” that discrimination is right, on the assumption that a super-intelligent machine would “know best”.

There are solutions to machine prejudice. Sometimes a few lines of corrective code are enough – as long as we think to include them. AI creators can reduce bias in future robots by bearing previous research results in mind. Choosing a gender – or a nationality, or an accent – for a robot needs to be a carefully weighed decision. The legal system can also offer solutions to AI bias. Copyright and privacy laws can encourage AI creators to choose their source material carefully, by acting on the accessibility of, and the legal risks attached to, each piece of data. By modernising the understanding of intellectual property, it might be possible to encourage AI creators to use much less biased data for machine learning[15]. But it is unreasonable to rely solely on legal institutions and AI designers; users also have their share of responsibility, as they shape the market through their buying choices. Everyone involved needs to strive to build fair, unbiased robots.
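
As a hypothetical illustration of what such corrective code might look like, here is the invented hiring sketch from earlier with the gendered feature deliberately withheld. This is a minimal fix, not a general cure – in real datasets, other features often act as proxies for gender – but it shows where a deliberate line of code can intervene.

```python
# Withholding the gender column so the model cannot condition on it.
# Data is the same invented hiring example as before.
from sklearn.linear_model import LogisticRegression

X = [[2, 0], [4, 0], [6, 0], [8, 0],   # [years_of_experience, is_female]
     [2, 1], [4, 1], [6, 1], [8, 1]]
y = [0, 1, 1, 1, 0, 0, 0, 1]           # 1 = hired, 0 = rejected

X_blind = [[experience] for experience, _gender in X]  # the corrective line

model = LogisticRegression().fit(X_blind, y)

# Two equally experienced candidates now receive exactly the same score.
print(model.predict_proba([[5]])[0][1])
```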

An archaic mythos is structuring our brand-new relationship with AI. This doesn’t mean that we can’t reshape the narrative. The way we use robots forges stories, old and new. We have the power to act, through our institutions, creations and buying choices. It is up to us to choose the story we want to tell.

ABOUT THE AUTHOR

Rebecca Zahabi is a Manchester-based BAME writer, part of the Royal Exchange Young Company, the founder and president of a French theatre association based in Lyon, and the co-creator of the table-top game “Engrenages”. She holds an MA in Creative Writing from the University of Manchester and works as an editorial assistant for Zuntold.


[1] You can chat with her and decide for yourself here: https://pandorabots.com/mitsuku/

[2] http://www.bbc.com/autos/story/20160303-are-you-gps-gender-biased

[3] http://www.macdorman.com/kfm/press/press.php

[4] “L’Ève future”, also translated as “Tomorrow’s Eve”.

[5] The Alphabet of Ben Sira, translated by Stern and Mirsky (1998).

[6] The novel was adapted into two films (1975, 2004) with three television sequels and remakes. The term “Stepford Wife” has entered the Collins Dictionary as a generic term for a submissive wife, showing the book’s huge cultural impact.

[7] https://www.ibm.com/watson/

[8] https://rossintelligence.com/

[9] https://www.theguardian.com/technology/2017/apr/13/ai-programs-exhibit-racist-and-sexist-biases-research-reveals

[10] You can see the results of his study, and have fun guessing which videos the Youtube algorithm favours on different subjects, here: https://algotransparency.org/

[11] https://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html

[12] https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G

[13] https://www.nytimes.com/2016/11/14/technology/facebook-is-said-to-question-its-influence-in-election.html

[14] https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

[15] Amanda Levendowski, “How Copyright Law Can Fix Artificial Intelligence’s Implicit Bias Problem” (2018), University of Washington School of Law.
