Google got rid of 'Smart Compose' pronouns because humans are sexist

Watch where you put that "him."
By Rachel Kraus
Created in our own image. Truly. Credit: Keystone-France/Gamma-Rapho via Getty Images

Unlike a lot of email signatures these days, Gmail doesn't specify its preferred pronoun.

To avoid perpetuating gender bias, Gmail stopped its "Smart Compose" text prediction feature — which suggests likely endings to sentences and other phrases as Gmail users compose emails — from suggesting pronouns, Reuters reported Tuesday.

Google told Mashable that Smart Compose launched in May with that bias-averting policy already in place. However, Gmail product manager Paul Lambert only recently revealed this intentional move in interviews with Reuters.

Apparently, during product testing, a company researcher noticed that Smart Compose was assigning gendered pronouns in a way that mirrored some real-world gender bias: It automatically ascribed a "him" pronoun to a person only previously described as an "investor." In other words, it assumed that the investor — a role in a largely male-dominated field — was a man.

Studies show that in language, gender bias — or assuming someone's gender based on stereotypes or tendencies associated with men or women — has the power to both "perpetuate and reproduce" bias in the way people treat each other, and the way we think of ourselves.

"Gender-biased language is harmful because it limits all of us," Toni Van Pelt, the president of the National Organization for Women (NOW) said. "If a woman is using AI, and it refers to an engineer as a 'him,' it may get in her brain that only men make good engineers. It limits our scope of dreaming. That’s why it sets us back so far."

Gmail reportedly attempted several fixes for its own subtle gender bias, but none of them were perfect. So the Smart Compose architects decided the best solution was to remove pronoun suggestions altogether.

"At Google, we are actively researching unintended bias and mitigation strategies because we are committed to making products that work well for everyone," a Google spokesperson told Mashable over email. "We noticed the pronoun bias in January 2018 and took measures to counter it (as reported by Reuters) before launching Smart Compose to users in May 2018."

But an inherently sexist AI is not to blame for the potential gender bias within the algorithm. As with other AI tools, the gender bias at the root of Google's pronoun problem is a human one.

"Algorithms are reproducing the biases that we already have in our language," Calvin Lai, a Washington University in St. Louis professor and research director for the implicit bias research center Project Implicit told Mashable. "The algorithm doesn’t have a sense of what’s socially or morally acceptable."

Both Lai and Saska Mojsilovic, IBM's AI Science fellow specializing in algorithmic bias, explained that bias usually enters algorithms through the data they learn from, also known as "training data."

Mojsilovic said, "Training data can reflect bias in some way, shape, or form, because as a society, this is what we generate."


A Natural Language Generator (NLG) like Smart Compose learns how to "speak" by reading and replicating the words of humans. So if data contains overt or subconscious bias, expressed in language, then AI learning from that data will reproduce those tendencies.
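The mechanism the article describes can be illustrated with a deliberately simple sketch. This is not how Smart Compose actually works — Google's system is a neural model — but a hypothetical toy predictor built on co-occurrence counts shows the core point: if "investor" appears mostly alongside "he" in the training text, the model's top suggestion inherits that skew. The corpus here is invented.

```python
from collections import Counter, defaultdict

PRONOUNS = {"he", "she", "they"}

# A toy corpus standing in for training data; the skew toward "he"
# with "investor" is deliberate and hypothetical.
corpus = [
    "the investor said he would fund the startup",
    "the investor said he liked the pitch",
    "the nurse said she was starting her shift",
    "the investor said she wanted more data",
]

# For each non-pronoun word, count the pronouns that co-occur with it.
cooccur = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    pronouns = [w for w in words if w in PRONOUNS]
    for w in words:
        if w not in PRONOUNS:
            for p in pronouns:
                cooccur[w][p] += 1

def suggest_pronoun(word):
    """Suggest the pronoun most frequently seen alongside `word`."""
    counts = cooccur[word]
    return counts.most_common(1)[0][0] if counts else None

print(suggest_pronoun("investor"))  # "he" — the majority pattern wins
```

The predictor has no notion of fairness; it simply surfaces the most frequent pattern, which is exactly how a statistical skew in the data becomes a biased suggestion.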

Another sticking point is that bias in text generation is often difficult to articulate, and very dependent on context. And because the idea of bias and gender can be more interpretive or subjective, it can be harder to teach a machine to recognize and eradicate it.

"For us, as scientists and researchers, text is a more difficult category to master than other data types," Mojsilovic said. "Because text is fluid, and it's very hard to define what it means to be biased."

"A lot of times we think about gender bias in an old-school explicit way," Lai said. "But a lot of it happens much more subtly, on the basic assumptions that we have of other people."

Google is aware of the challenges that arise from training data. The company confirmed that it tests its algorithm training data for bias before deploying it. This is a continual process.
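What "testing training data for bias" means in practice varies, and Google has not published its exact checks. One simple, hypothetical form of audit is to measure how lopsided gendered-pronoun co-occurrence is for a given word before the data is used for training; severe skew flags the data for review. The function and corpus below are illustrations, not Google's method.

```python
from collections import Counter

def pronoun_skew(sentences, target):
    """Return the share of gendered pronouns co-occurring with `target`.

    A hypothetical audit step: a result far from 50/50 suggests the
    corpus could push a suggestion model toward one gender.
    """
    counts = Counter()
    for s in sentences:
        words = s.lower().split()
        if target in words:
            counts["he"] += words.count("he") + words.count("him")
            counts["she"] += words.count("she") + words.count("her")
    total = counts["he"] + counts["she"]
    return {p: c / total for p, c in counts.items()} if total else {}

# Invented sample data for demonstration.
sample = [
    "The investor said he would call him back",
    "An investor told me he passed on the deal",
    "The investor said she signed the term sheet",
]
print(pronoun_skew(sample, "investor"))  # {'he': 0.75, 'she': 0.25}
```

An audit like this only catches the most explicit kind of skew; as Lai notes below, much of the bias in language is subtler than raw pronoun counts.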

"As language understanding models use billions of common phrases and sentences to automatically learn about the world, it can also reflect human cognitive biases by default," a Google spokesperson told Mashable over email. "Being aware of this is a good start, and the conversation around how to handle it is ongoing."

Moreover, Gmail's Smart Compose presents challenges beyond those of other NLG tools. At the launch of Smart Compose's predecessor, Smart Reply, Google wrote that its NLG tools learn from and tailor their suggestions to individual Gmail users. So even if the algorithm was trained on data tested for bias, the very real and flawed humans it continues to learn from may have prejudices that they subconsciously express through text.

"They’re ultimately based on how people are using the language," Lai said. "And sometimes that might reflect something accurate about the world. And sometimes it might not."

At this point, removing pronoun suggestions may be the best option to avoid gender bias, or to avoid prescribing a pronoun that doesn't match someone's gender identity. NOW's Toni Van Pelt applauds the decision, and sees sensitivity around pronouns as an admirable move for an industry leader like Google.

"I think it’s really important that they were aware of their prejudice, they were aware of their bias, and did the right thing in being conservative in eliminating this," Van Pelt said. "They are leading by example for the other AI companies."

But it's also a temporary fix to the pervasive problem of making sure AI doesn't reflect and enhance our own biases.

"It leaves it up to the user to make up their own minds, rather than put the responsibility on the algorithm’s shoulders," Lai said. "That seems to be one way to absolve or remain a neutral party."

This is a problem Google is proactively working on. The company has released multiple studies, tools, and other initiatives to help developers eradicate bias. And it's working to define criteria for "fairness," which is a prerequisite for eliminating bias from AI NLG tools in the first place.

Other researchers are also leading the way. IBM has built a tool anyone can use to assess training data. Lai's consortium Project Implicit studies the phenomenon of, and potential preventions for, implicit bias. And, crucially, hiring a diverse workforce — one that reflects the real world — is paramount to creating equitable and moral AI.

"We hold these algorithms, perhaps rightfully so, to a higher standard than we hold everyday people," Lai said. "There is a vested interest in terms of our society’s values and morals to be gender neutral in many of these cases."

The silver lining: The development of AI is bringing to the fore just how deeply these biases are ingrained in our collective language. Recognizing bias as we build these tools provides the opportunity to help correct it.

"We are living in a world that is full of biases, the biases we created as humans," Mojsilovic said. "If we are really diligent about it, think about the outcome that we can end up with the technology that can actually be better than us, or help us be better, because it will teach us or point out what we ourselves might have missed."

Rachel Kraus

Rachel Kraus is a Mashable Tech Reporter specializing in health and wellness. She is an LA native, NYU j-school graduate, and writes cultural commentary across the internetz.

