Gender Representation + Bias in AI: Why Voice Assistants Are Female

Welocalize August 16, 2021

Have you ever noticed how Siri, Alexa, Cortana, Google Assistant, and other voice assistants have female voices?

This form of AI is increasingly part of our daily lives. Almost 3 billion people currently use voice-automated software.

We Prefer Female Voices

An overwhelming number of studies suggest that humans prefer the sound of a female voice, and some researchers even theorize that this preference begins before birth, because female voices soothe and calm us in the womb.

Other research has found that women tend to articulate vowel sounds more clearly, which makes them easier to understand, particularly in the workplace. This preference is nothing new for the industry.

There’s No Data for Male Voices

This is probably the argument programmers cite most often when they begin creating voice-automated AI.

Over the years, text-to-speech systems have been trained predominantly on female voices. Because we have such rich data for female voices, companies are more likely to opt for them when creating voice-automated software, as it’s the most time- and cost-efficient option.

Female voice recordings date back to 1878, when Emma Nutt became the first woman to work as a telephone operator. Her voice was so well received that she became the standard other companies strove to emulate. By the end of the 1880s, telephone operators were exclusively female.

Because of this gender switch in the industry, we now have well over a century of female audio recordings that can be used to create new forms of voice-automated AI we know users will respond well to.

Why spend time and money collecting male voice recordings and creating male-voiced AI when you don’t know how users will respond to it? Given the challenges of creating male voice automation, it makes commercial sense to rely on existing female audio recordings.

What Can We Do to Tackle the Problematic Gender Representation in AI? 

In the 1990s, Stanford researchers Byron Reeves and Clifford Nass discovered that while people interacted with male- and female-voiced machines differently, they treated these computerized voices with respect. Fast forward to 2021, and you’ll see that this respect has disappeared. AI developers now have to consider gender bias when developing applications, including how voice assistants (most of which use female voices) respond to abusive language.

Siri, Google Assistant, and Alexa have drastically altered their responses to negative language. When faced with an offensive comment, they will reply ‘I won’t respond to that’ or simply ‘no’.
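To make that shift concrete, here is a minimal, purely hypothetical sketch of the kind of response policy described above: abusive input is routed to a firm refusal instead of a jokey or passive reply. Production assistants rely on trained classifiers rather than keyword lists, and every name in this snippet (OFFENSIVE_TERMS, is_offensive, respond) is illustrative only.

```python
# Hypothetical sketch of a refusal policy for abusive input.
# Real assistants use trained abuse classifiers; this keyword check
# and the placeholder terms below are for illustration only.

OFFENSIVE_TERMS = {"example_slur", "example_threat"}  # placeholder terms

REFUSAL = "I won't respond to that."


def is_offensive(utterance: str) -> bool:
    """Very rough stand-in for a real abuse classifier."""
    words = utterance.lower().split()
    return any(term in words for term in OFFENSIVE_TERMS)


def respond(utterance: str) -> str:
    """Route abusive input to a firm refusal; otherwise answer normally."""
    if is_offensive(utterance):
        return REFUSAL
    return "Okay, let me help with that."


if __name__ == "__main__":
    print(respond("What's the weather like today?"))   # normal reply
    print(respond("You are an example_slur"))           # firm refusal
```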

You only need to go back as far as 2017 to see how some voice assistants responded to sexual or threatening comments. Many English-speaking voice assistants would reply to a rude question in a jokey or passive way. These responses play into the stereotype that women should accept sexually explicit and rude language from others.

It’s so important that these AI assistants respond to these sorts of comments in a way that dismantles archaic gender stereotypes and reflects the norms and values of society today. It’s great to see we’re moving in the right direction.

We can still do more to address this global issue, and there are ways to tackle this bias not just in voice automation but across the AI industry as a whole.

1. Implement AI Standards

The global market value of AI is expected to reach $267 billion by 2027. AI, and voice assistants in particular, are becoming an integral part of our daily lives, so much so that 41% of users feel their voice-automated software is like another person or a friend.

Yet there are no industry-wide guidelines on the humanization of AI.

Most tech companies still opt for a female voice and/or a female name, which can reinforce the outdated gender stereotype that women are here to serve.

It makes sense to implement industry-wide standards for how gender is portrayed throughout AI. This would lead to a more diverse range of AI applications representing different genders, sexual orientations, and races, along with clear guidance on how they should handle sexual or otherwise harmful comments.

2. Create an Inclusive AI Industry

According to the World Economic Forum, women make up just 22% of global AI jobs, while men account for 78%.

To balance out this percentage, we need to encourage more women and people of other genders to pursue careers in AI and data.

If AI development teams are more diverse, the workforce is better placed to address complex gender issues, including how to respond to sexually abusive or aggressive language, before new voice assistants are released to the public.

3. Invest in Machine Learning Technology

With new machine learning technology at our disposal, text-to-speech systems are becoming more advanced and are now better able to create naturalistic male and female voices for AI.
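As a rough illustration of what offering a choice of voices can look like in practice, the sketch below uses the pyttsx3 Python library, which wraps the voices already installed on the operating system rather than a neural text-to-speech model; the available voices and their names vary by platform, and the snippet is purely illustrative.

```python
# Illustrative only: pyttsx3 exposes the operating system's installed voices,
# not a neural TTS model, but it shows the basic pattern of letting a product
# offer more than one default voice.
import pyttsx3

engine = pyttsx3.init()

# List the voices available on this machine so a developer (or user) can choose.
voices = engine.getProperty("voices")
for voice in voices:
    print(voice.id, voice.name)

# Pick a voice by id before speaking; here we simply select the first one found.
if voices:
    engine.setProperty("voice", voices[0].id)

engine.say("Hello, I am your assistant.")
engine.runAndWait()
```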

 

Voice assistants are, and may always be, a part of our daily lives. Because of this, we need to address the gender bias surrounding this form of AI and how these assistants are programmed to respond to sexist language.

By engaging in open discussions like this and encouraging more women to pursue careers in AI, we can work towards creating more inclusive assistants that reflect today’s norms and values around gender.

 

To hear more, watch this joint Women in Localization and Welocalize on-demand webinar, The Women Shaping AI and Data in the Language Industry. This unique session features Carrie Fischer, Director of Localization at Subway, and Welocalize experts Olga Beregovaya, VP of AI Innovation, and Tiarne Hawkins, AI Services Director.


For more information on Welocalize AI Services, click here.