Queer social media experts weigh in on AI content moderation

JL Odom

Social media platforms commonly use artificial intelligence for content moderation, with algorithm-driven software screening the content users post. Ultimately, the AI determines whether a post adheres to or breaches platform guidelines.
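In rough terms, such a system assigns each post a score against the platform’s rules and acts when that score crosses a threshold. The minimal Python sketch below is a hypothetical illustration of that decision flow, not any platform’s actual code; the flagged terms, thresholds, and scoring function are all invented for the example.

```python
# Hypothetical sketch of automated content moderation: score a post,
# then compare the score to policy thresholds. Real platforms use
# large machine-learned models, not a keyword list like this.

FLAGGED_TERMS = {"slur_example_1", "slur_example_2"}  # placeholder terms
REMOVAL_THRESHOLD = 0.8
REVIEW_THRESHOLD = 0.5


def score_post(text: str) -> float:
    """Return a crude 'policy violation' score between 0 and 1."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in FLAGGED_TERMS)
    return min(1.0, hits / len(words) * 10)


def moderate(text: str) -> str:
    """Map a violation score to a moderation decision."""
    score = score_post(text)
    if score >= REMOVAL_THRESHOLD:
        return "remove"                 # clear breach of guidelines
    if score >= REVIEW_THRESHOLD:
        return "send to human review"   # borderline case
    return "allow"                      # content adheres to guidelines


if __name__ == "__main__":
    print(moderate("a harmless post about the weekend"))  # -> "allow"
```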

But issues can arise when a social media company loosens its restrictions on content, or if the algorithms themselves – the building blocks of AI content moderation – are biased.

Social media experts Jenni Olson, senior director of GLAAD’s social media safety program, and Andrea “Annie” Brown, founder and CEO of the ethical AI company Reliabl, shared their perspectives.

Meta’s revised stance on moderating content
In early January, Meta Platforms Inc., the parent company of Facebook, Instagram, Threads, and WhatsApp, announced policy changes in its “Community Guidelines.”

Among the changes are a reduction in AI content moderation and a corresponding shift to “Community Notes,” a volunteer-based program comparable to the one used on the platform X, through which users themselves report content violations.

“One thing Meta has now said is that they are not going to be doing as much moderation. They're going to focus on more extreme things, like CSAM [child sex abuse material] and violence, proactively, but everything else they are basically going to not moderate,” said Olson in a Zoom video chat with the Bay Area Reporter.

This change, coupled with the company’s revamped “Hateful Conduct” policy, which now permits anti-LGBTQ terminology and discourse, and the dissolution of its fact-checking program, makes for one giant red flag for queer individuals.

“The Meta stuff is so bad. It’s just bad upon bad upon bad,” said Olson.

According to Meta founder and CEO Mark Zuckerberg, the company’s reduced reliance on AI for content moderation is a necessary measure to curb the flagging of posts and speech that do not warrant review and removal.

“The problem with complex systems is that they make mistakes. Even if they accidentally censor just 1% of posts, that’s millions of people. And we’ve reached a point where it’s just too many mistakes and too much censorship,” he said in a video featured in the Meta-released article, “More Free Speech and Fewer Mistakes.”

The piece, authored by Joel Kaplan, Meta’s chief global affairs officer, further details the company’s reasoning behind its content moderation policy change.

“In recent years we’ve developed increasingly complex systems to manage content across our platforms, partly in response to societal and political pressure to moderate content. This approach has gone too far. As well-intentioned as many of these efforts have been, they have expanded over time to the point where we are making too many mistakes, frustrating our users and too often getting in the way of the free expression we set out to enable,” Kaplan wrote.

Brown, whose company Reliabl provides platforms with community-centric data management solutions and custom, high-performance algorithms, sees Meta’s content moderation reduction as a poor excuse for a strategy. She emphasized the need to train the AI with an understanding of the context of a community, which is a focus of her company.

“At Reliabl, we essentially bring marginalized communities into the machine learning pipeline, because that's what's missing from this AI,” Brown explained. “Basically all these moderation rules, all these moderation algorithms, are written primarily by white, straight cis men. And so that's the perspective that they're going to have. And then they wonder why they can't work in different contexts. So there's a laziness factor there.”

Gleaning insight from research
Olson, a lesbian who lives in the Bay Area, oversees GLAAD’s Social Media Safety Index, an annual report on the current climate in social media platforms for the LGBTQ community and the platforms’ policies.

The SMSI, first published in 2021, provides insight into LGBTQ discrimination and harassment, restrictions on self-expression, and privacy issues on six major social media platforms – Facebook, Threads, Instagram, TikTok, YouTube, and X – and rates them using a “platform scorecard.” In the “2024 Social Media Safety Index,” all but one of the platforms received an “F” grade. (TikTok earned a D+.)

The 2024 SMSI includes a comprehensive two-page section, titled “Focus on AI: Risks for LGBTQ People,” covering issues such as generative AI systems, automated gender recognition, and content moderation.

As noted in the section, a major risk when it comes to using AI is the potential for algorithmic bias, which occurs as a result of systematic errors in machine learning algorithms.

“One of the most significant things in terms of LGBT people and other historically marginalized groups is, garbage in garbage out, bias in bias out. If you don't have LGBT people, people of color at the table giving the inputs to the AI systems, then they will be racist and homophobic and so on,” said Olson.

Algorithmic bias in social media platforms’ content moderation systems can result in the disproportionate flagging and takedowns of content; it can also result in discrimination and reinforce stereotypes.

Essentially, the data used to train the algorithms have a substantial impact on the presence of bias. If the data are not representative of a certain group, such as LGBTQ people, then biases surrounding that group are perpetuated.
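To illustrate the point in miniature, the hedged Python sketch below uses an invented, deliberately skewed training set in which a community’s own terms appear only in posts labeled as violations; the resulting word-count classifier then flags a benign in-community post. The data, labels, and classifier are hypothetical and far simpler than anything a real platform uses.

```python
# Hypothetical illustration of "bias in, bias out": if the training data
# never shows a community's terms in a benign context, the model learns
# to treat those terms as violations regardless of how they are used.
from collections import Counter

# Skewed training set: "queer" only ever appears in posts labeled "violation",
# because benign, in-community uses were never collected or labeled.
training_data = [
    ("hateful rant using queer as an insult", "violation"),
    ("another attack calling someone queer", "violation"),
    ("nice photo of my dog", "ok"),
    ("great concert last night", "ok"),
]

# Count how often each word appears under each label.
word_label_counts = {"violation": Counter(), "ok": Counter()}
for text, label in training_data:
    word_label_counts[label].update(text.lower().split())


def predict(text: str) -> str:
    """Label a post by which class its words appeared with more often."""
    scores = {
        label: sum(counts[w] for w in text.lower().split())
        for label, counts in word_label_counts.items()
    }
    return max(scores, key=scores.get)


# A benign, self-descriptive post gets flagged because the word "queer"
# was only ever seen in the "violation" class during training.
print(predict("proud to be queer and celebrating pride"))  # -> "violation"
```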

“How well companies are actually doing that [i.e. using community-centered data] or not is a huge question. And of course, there's all kinds of examples of ‘They're not doing well’ and really horrible examples of bias,” Olson added.

Brown, who has spent the past six-plus years at UC San Diego researching algorithmic bias, particularly in social media spaces, said that if a company is looking to improve AI, the exclusion of certain viewpoints and datasets isn’t the answer.

“You’re literally just making the AI worse, because at the end of the day, AI’s main goals are accuracy and efficiency, and if you want an accurate AI, that means including as many diverse viewpoints as possible,” she said.

“All that is doing is just setting the technology further and further back, because you can't have an AI that understands human context if you don't help it understand the full spectrum of human reality. Even for the companies who don't necessarily care about inclusion and bias, by making their data pipeline and their training process more inclusive, they'll actually make their technology better,” she added.

Brown, who is genderfluid and pansexual, described herself as a big fan of GLAAD’s work and as someone who’s done a considerable amount of activism around LGBTQ inclusion in online spaces. She’s assisted GLAAD with some of its research on content moderation.

“The Safety Index and my organization, Reliabl, have kind of been doing parallel research for a little while now, and now we're joining forces, which is cool. We're also working on some case studies on social media platforms who haven't totally abandoned eliminating hate speech or more inclusive content moderation policies. [We] may be working with GLAAD in the future on that as well,” she shared.

Brown is also the creator of Lips, a social media platform for the LGBTQ community. Launched in 2019, Lips is still up and running, with an estimated 50,000 users.

“It kind of just operates as its own little space on the internet for people to feel like they can be themselves,” explained Brown.

Through Lips’ development, she gained a firsthand understanding of what social media moderation looks like and what’s possible.

“My conclusion I came to was that these big platforms make it seem like it's harder than it is. They just don't care enough is the problem,” she said.

Furthering ethical AI
Researchers’ and companies’ ongoing advocacy for, development of, and application of AI content moderation systems grounded in transparency, fairness, and diverse datasets – that is, ethical AI – are sources of hope.

As Brown commented, “Social media can be a very positive thing, but unfortunately, [an initial] social media platform was built by a guy who is very misogynistic to rate the hotness of women on his campus [a reference to Zuckerberg’s “FaceMash” website, which predates Facebook]. That's not a good foundation to start any business from.”

“With AI, it's really about who builds the tools, and we really do have to build our own,” she said.

This story is part of the Digital Equity Local Voices Fellowship lab through News is Out. The lab initiative is made possible with support from Comcast NBCUniversal.

