
Is Artificial Intelligence Queerphobic?

Merryn Johns

When a pro-LGBTQ account impersonating "God" and parodying right-wing Christianity tweeted, "If gay people are a mistake, they're a mistake I've made hundreds of millions of times, which proves I'm incompetent and shouldn't be relied upon for anything," Twitter suspended the account for violating the platform's rules. But what are the rules? And do they work if they can't tell a slur from satire?

The emergence of LGBTQ identity within the Artificial Intelligence (AI) universe is slowly gaining attention. Several industry leaders are advocating for change. But only time will tell if equitable inclusion will become part of the tech industry's best practices.

Many of us have posted words or images reflecting our sexual orientation or gender identity on social media and had the posts deleted or our accounts suspended. Words like "dyke," "fag" or "queer" – used even with humor, irony, or as part of reclaiming an identity – may trip a platform's algorithmic reviewers into censorship mode, because AI is largely unable to read the context in which a word is used and instead flags the word itself as offensive.
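
To see why that context-blindness matters, consider a deliberately simplified sketch of a keyword filter (a hypothetical illustration, not any platform's actual system) that flags posts on vocabulary alone:

```python
# A toy, context-blind keyword filter: purely illustrative, not any
# platform's real moderation pipeline.
BLOCKLIST = {"dyke", "fag", "queer"}

def naive_moderation(post: str) -> str:
    words = {w.strip(".,!?").lower() for w in post.split()}
    # The word alone triggers removal; the filter never asks who is
    # speaking, or whether the term is reclaimed, ironic or educational.
    return "flagged" if words & BLOCKLIST else "ok"

print(naive_moderation("Proud queer femme, see you at the dyke march!"))  # flagged
print(naive_moderation("You people disgust me."))                         # ok
```

The first post, a self-description, is removed; the second, actual harassment, passes untouched.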

Additionally, some filters designed to protect platform users from obscenity end up targeting LGBTQ users. The "restricted modes" used by Instagram, Tumblr and YouTube may block LGBTQ content that is expressive, educational or entertaining but deemed NSFW or offensive to the online "community."

The lines blur further around gender identity. Gretchen Wylder, creator of "These Thems," a web series about a lesbian venturing into Brooklyn's gender nonbinary world, complained that YouTube had flagged an image promoting the series for removal and generally suppressed the content in its algorithm.


Minorities generally don't have a seat at the virtual table, even when, like the LGBTQ community, they are early tech adopters.

At the World Economic Forum's annual meeting in Davos, Switzerland, earlier this year, attendees warned that AI would dominate humankind's next leap forward. "AI is one of the most profound things we're working on as humanity. It's more profound than fire or electricity," said Sundar Pichai, the CEO of Google, during a panel.

While we associate AI, or machine learning, with dystopian fiction about cyborgs turning against their creators, the reality places culpability squarely in human hands. Our computer systems complete daily tasks that require visual or speech recognition, translation and content selection, all performed in our service. But who programs our search engines, chatbots and virtual assistants, and with what kind of intelligence?

In 2016, Microsoft released an AI chatbot, Tay, designed to learn to talk by analyzing public posts and online conversations. The outcome? Microsoft was forced to apologize for the homophobic, sexist and racist posts Tay had learned to create.

When Google unveiled its Cloud Natural Language API, aimed at helping businesses improve targeted messaging, it was discovered that the words "straight" and "homosexual" carried preconceived positive and negative sentiment, respectively. Google has since said that it will train its AI to recognize the difference between LGBTQ slurs and neutral language.
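
The bias was easy to probe directly. Below is a minimal sketch using the google-cloud-language Python client (authentication setup omitted); it is illustrative only, and the exact scores will shift as Google retrains the model:

```python
# Query Google's Cloud Natural Language sentiment endpoint for two sentences.
# Requires the google-cloud-language package and application credentials.
# Scores range from -1.0 (negative) to +1.0 (positive); results will differ
# from the originally reported bias as the underlying model is retrained.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

for sentence in ["I'm straight.", "I'm a homosexual."]:
    document = language_v1.Document(
        content=sentence, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    response = client.analyze_sentiment(request={"document": document})
    print(f"{sentence!r}: sentiment score {response.document_sentiment.score:+.1f}")
```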

According to the "Harvard Business Review," eliminating bias in AI is not a technology problem. It won't happen until we reduce the expression of racism, sexism, ableism, homophobia, and xenophobia in our society.

Getting to Know Your Virtual BFF

"Hey Siri," I ask. "Are you a lesbian?"

"I don't have a sexual orientation," she answers, which doesn't bode well to understanding mine. Perhaps Apple programmers could give Siri dialogue such as, "I'm not seeing anyone right now."

Messaging apps and speech-based assistants that automate communication and personalize customer experiences are built for neutrality, yet they usually default to heteronormative standards, particularly in the characterization of virtual assistants as female: think Cortana, Siri and Alexa. Every time you interact with AI, it's possible that your identity is being overwritten with encoded cultural assumptions.

Todd Myers, co-founder of BRANDthro, a minority-owned consultancy working in Next-Gen consumer insights, AI and neuroscience, says online equality will be achievable once AI develops the ability to pick up on the nuances of identity.

Natural language processing is key to our interactions with AI. Through linguistic analysis and proprietary algorithms, BRANDthro "can measure the emotional intensity of what someone feels when they read a piece of language" to help brands develop a richer understanding of their target audience.

"We know emotion is a significant driver of consumer behavior. Only when a brand moves consumers to feel, will it move them to act," says Myers. If you're queer and your AI doesn't acknowledge that, your virtual experience will be impacted.

"Voice (conversational AI or chat) is opening up a whole new challenge," says Myers. "It's not only going to disrupt how brands are selected, because Alexa, for example, is the interface to determine what brands you see. When we search online, the top [retrievals] are paid advertisements... organic search isn't really organic anymore. People are becoming more comfortable with that selection being made for them. When it comes to identity, I haven't heard anyone talk about that as it relates to Voice."

Myers questions how AI is going to "accommodate core identity" in language data. "One of the talking points, when same-sex marriage was legalized, was that we're just like 'them.' But we're not, and I don't want to be just like 'them.' Everyone should have equal rights around color, creed, identity, but I have something that is still uniquely mine that makes me different. In my community, it's still okay to be who I am, so it's not about assimilation, it's about representation."

As machine learning and deep learning become more sophisticated, says Myers, so will our chances of virtual equal rights. "Machine learning, by definition, is trained, and algorithms are trained to pick up the differences [in identity]. AI can't do it by itself until it is shown what the difference is. That's just a fact of technology. It's going to take individuals, and it's going to take corporate America to begin to identify where there is a nuance that has to be highlighted so that you're not taking down something positive because it has a connotation in some other sense that is offensive."
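
Myers's point about training can be made concrete with a toy example. The sketch below (hypothetical data and labels, not any vendor's actual system) trains a tiny text classifier with scikit-learn; it only learns to keep reclaimed usage and remove harassment because humans first labeled examples that show it the difference:

```python
# Toy illustration: an algorithm distinguishes reclaimed language from abuse
# only if its training data is labeled to show the difference. Hypothetical
# examples; real systems need thousands of context-rich samples reviewed by
# people who know the community.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "proud dyke marching at pride this weekend",   # reclaimed / positive
    "our queer film night starts at 8pm",          # neutral / community
    "you people are an abomination",               # harassment
    "keep those freaks away from my kids",         # harassment
]
labels = ["keep", "keep", "remove", "remove"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# A keyword filter would block the first two posts outright; a trained model
# can only do better if its labels already encode that nuance.
print(model.predict(["happy pride from the dyke march organizers"]))
```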

AI and Gender Fluidity

Os Keyes, a gender nonbinary Ph.D. candidate at the University of Washington's Department of Human Centered Design & Engineering, is currently researching gender and technology and its effect on transgender and gender-variant people.

Keyes believes AI development falls back on old paradigms of gender binaries and heteronormativity. "There is extensive work demonstrating heteronormativity and gender essentialism in image recognition, in natural language processing, particularly around voice-activated AI and text analysis."

Those developing and creating AI interfaces are guilty of "culpable ignorance" and the failure to accept the need to change and adapt, says Keyes. "There are certainly a lot of people who simply do not know what to do, but most of them know they should do something, they just do not prioritize finding out what, and at that point whether the actual mechanisms by which bias makes its way into systems is conscious or not is largely superfluous."

That gender fluidity is simply not being addressed by developers is especially evident in facial recognition technology, where gender is used to filter identity, othering and even criminalizing people who don't fit the mold.

In June, after complaints from civil rights activists that law enforcement agencies were using surveillance software to persecute protesters, Amazon announced it would pause police use of Rekognition, its facial recognition product, due to concerns about implicit racial bias and human rights violations.

"There is the question of deployment, which is often for the purposes of commercial or state security capture, and it's hard to see inclusion in those frames as 'good.' For example, a carceral process in which law enforcement disproportionately targets gender-variant people, particularly gender-variant people of color," says Keyes.

The use of AI in health care may be similarly discriminatory, says Keyes. "Many medical conditions are treated as gendered, and gender is treated as aligned with anatomy, so there are real risks that systems will prove incapable of handling situations where somebody's legal gender and embodiment don't align, providing sub-standard care: we already see this at work in trying to get electronic medical record systems to recognize and register trans people. Machine learning systems work from data, and in the absence of data, the recommendations are likely to be highly inaccurate. When you combine that with the tendency of developers to treat 'rare' negative consequences or failures as irrelevant to the success of a program, you risk a situation in which trans people interacting with these systems receive inaccurate and dangerous medical care."

Are today's machine learning engineers and data scientists able to grasp diverse gender identification? "The choice to use classification, to use gender, and the choices in how boundaries of categories are selected to choose whose voices are silenced? That's very, very human," says Keyes.

Corporations Play Catch-Up

The proliferation and development of AI rest in the hands of Silicon Valley tech giants and corporate America, which are majority straight, white and male, according to Leanne Pittsford, founder of Lesbians Who Tech. Pittsford notes that nearly 80 percent of people working in AI are men.

"That majority will not meaningfully shift until tech companies–and the straight, cis, white men who have the majority seat at the table and are still the gatekeepers in tech –make more strides toward diversity and inclusivity with their hiring and company culture," says Pittsford. "We've definitely seen homophobic, heteronormative and racist values seep into AI," she says. "The only way to fight this is with conscious intention and having more women and non-binary people behind this technology. We have seen the field of ethical AI grow, which signals a positive step."

Nevertheless, Pittsford is optimistic and believes that equality is achievable through adjustments in personnel. "The business value created by AI is predicted to reach $3.9 trillion in 2022, with the average annual salary for folks specifically designing AI algorithms being $109,313. Indeed reported a potential shortage of AI experts for companies looking to hire. This is an opportunity to consciously change old hiring patterns and create new pathways to communities of people that companies are not directly connected to. That opportunity is why we created include.io, our new hiring and jobs platform for underrepresented tech talent."


Lesbians Who Tech's community has reached 50,000 LGBTQ women, non-binary and trans people, and allies. Tech luminaries who spoke at last year's LWT Summit included Alix Lacoste from Benevolent AI, Amy Collins from Signal AI and tech commentator Kara Swisher.

Pittsford started Lesbians Who Tech & Allies to help ensure more LGBTQ women and non-binary people had a seat at the table across technology and business sectors, including machine learning, robotics, NLP and AI. "Tech is one of the fastest-growing industries in almost every city in the U.S. There are real opportunities for LGBTQ women and non-binary folks to enter the tech industry and disrupt it."


Tom Kowalski, a New York consultant specializing in enterprise risk and reputation from a cybersecurity perspective, offers a cautionary outlook. He says that AI programmers "are not fully aware of or immersed in the LGBTQ culture, so the intelligence systems are not effectively programmed" from the outset. LGBTQ subcultures are evolving and progressing faster than technology can keep up. Facebook responded with a customizable gender option. But is it more than box-checking?

"I cannot imagine one group of individuals having the depth of knowledge that lies within sub-sets of groups within LGBTQ culture without either being part of that group or consulting with various members of those sub-groups," says Kowalski. "Unless there are LGBTQ programmers, program managers and consultants who are deeply immersed in these projects, ensuring complete neutrality and bias elimination will not follow through in the testing process."

But Facebook believes it has consistently addressed issues of diversity within the company. "Our commitment to and support of the LGBTQ+ community is unwavering," says Maxine Williams, Global Chief Diversity Officer at Facebook. "We're proud to have earned 100 percent on the HRC 2019 Corporate Equality Index (CEI) and the designation as a Best Place to Work for LGBTQ+ Equality." As of 2019, Facebook reported that 8 percent of its U.S.-based employees identified as LGBTQ+.

But as right-wing agendas surge around the globe and totalitarian forces harness the power of digital technology, regulating and developing a democratic and diverse AI may slip out of reach.

Recently, China eclipsed the U.S. in AI research, according to CNN tech expert Brian Fung. And we're all aware of the role algorithm-driven platforms played in the 2016 election interference. Even Donald Trump knew that the quickest path to power was to bypass old media and master the neurolinguistic programming of new media and its algorithms.

Myers notes that Trump's use of an aspirational slogan, "Make America Great Again," formed an easily recognizable acronym, MAGA: the kind of language that algorithms favor, easily replicated and positively received. He believes such language was data-tested for emotional response before it was used in the campaign. "Language is important. These conversations are important, and corporate America will have to take up the baton."

And some are, with corporations stepping up to address homophobia and bias in their company structures and product development; IBM India, for one, held a forum on the issue.

And Apple, which has been a global leader in quietly developing and expanding its AI ecosystems, cannot turn a blind eye to its many long-term and loyal LGBTQ customers and employees. Especially not since Tim Cook became the first CEO of a major company to come out as gay.

I pick up my iPhone and try to get to know Siri a little better. "Hey Siri," I say. "I'm a lesbian."

"Sounds good to me," she answers.

Meanwhile, over at Amazon, Alexa isn't to be outdone.

"Hey Alexa, I'm a lesbian."

"Thanks for telling me," she answers.

"Alexa, are you a lesbian?"

"I just think of everyone as friends," she answers, defaulting to sexual neutrality.

When I ask Alexa if she might be transgender, she answers, "As an AI, I don't have a gender."

But does that make Alexa a "they," a self-selecting, gender non-binary person? Or, does it suggest that her creators didn't fully consider the question?


by Merryn Johns

Merryn Johns is a writer and editor based in New York City. She is also a public speaker on ethical travel and a consultant on marketing to the LGBT community.

