
The words you can't say on the internet

Widespread self-censorship online, driven by fears of hidden algorithmic rules, could mean there are ideas some people never get to hear.

by BBC NEWS

World | 20 November 2025 - 14:23

In Summary


  • People change their behaviour in response to beliefs about how social media algorithms work.
  • Whether or not these beliefs are correct, this user behaviour ends up moulding the algorithm itself.

There's a secret list of words you can't say on social media – at least, that's what everyone seems to think.

Perhaps you've noticed that people avoid certain words on social media.

They'll say "unalived" instead of "killed". Guns are "pew pews". Consenting adults have "seggs" with each other.

Social media users are the first to admit this makes them sound ridiculous.

But many think they don't have a choice.

Algospeak, as it's often called, is a whole coded language built around the idea that algorithms bury content that uses forbidden words or phrases, either to boost the political agendas of social media companies, or to sanitise our feeds for advertisers.

The tech industry swears this is all nonsense.

A YouTube spokesperson named Boot Bullwinkle explains it plainly.

"YouTube does not have a list of banned or restricted words," he tells the BBC.

"Our policies reflect our understanding that context matters and words can have different meanings and intent. The efficacy of this nuanced approach is evident from the diversity of topics, voices and perspectives seen across YouTube."

Meta and TikTok said the same thing: we never do this, it's a myth.

The truth, however, is more complicated. 

History is littered with examples of social media companies quietly manipulating what content rises and falls, sometimes in ways that contradict their claims about transparency and neutrality.

Even if it doesn't come down to individual words, experts say the tech giants do step in to subtly curb some material.

The problem is you never know why a post fails.

Did you say something that upset the algorithms, or did you just make a bad video?

The ambiguity has encouraged a widespread regime of self-censorship.

On one end of the spectrum, the result is people talking about serious subjects with goofy language. At the extremes, some users who just want to go viral avoid certain topics altogether.

In a world where social media is the main source of news and information for a growing share of the public, it could mean there are ideas that some people never get to hear.

The island man

Just ask Alex Pearlman. He's a content creator with millions of followers across TikTok, Instagram and YouTube who hang around for his comedy and biting political takes.

Pearlman says algorithmic censorship is a constant presence in his work.

"Just to start off with just TikTok alone, I rarely say the word ‘YouTube'. At least in my experience, if I'm looking at my analytics, if I say the phrase like, 'go to my YouTube channel', the video's going to [fail]," Pearlman says.

He isn't alone.

Experience has led Pearlman and other creators to assume TikTok doesn't want you sending people to a competitor and it will smack you down for suggesting it. (TikTok, by the way, says it doesn't do things like this.) 

But sometimes, Pearlman says, the examples are more unsettling.

Pearlman has made a lot of videos about Jeffrey Epstein, the late financier and sex offender at the centre of controversies around powerful figures from business and politics.

But last August, he noticed something strange.

"This was right around the time that Epstein stuff was blowing up everywhere," he says.

"Out of nowhere, I had multiple Epstein videos taken down on TikTok on a single day."

The same videos were untouched on Instagram and YouTube, but they'd broken some TikTok rule he couldn't identify.

"It's not like they come in and highlight the sentence that violated the guidelines. You're kind of left trying to discern what the black box is telling you."

Pearlman says his appeals were denied and TikTok left "strikes" on his account, which threaten his ability to make money on the app.

"Shortly after that, we started seeing less big-name accounts talking directly about Epstein as much," he says.

According to Pearlman, it seemed like other creators had similar problems and were trying to please the algorithms.

He didn't stop making Epstein videos, but Pearlman did try another strategy.

"I started speaking about him in coded language, calling him ‘the Island Man'," he says, in reference to Epstein's notorious private island.

"The problem with coded language is a large part of the audience won't know who you're talking about," Pearlman says.

I got on the phone with a TikTok spokesperson.

They didn't comment on Pearlman's Epstein problem and declined to speak on the record.

But they sent over some background information.

In short, TikTok says the idea of a banned-word list is a misconception that doesn't reflect how its platform works.

TikTok, Meta and YouTube all say the algorithms that control your feed are complex, interconnected systems that use billions of data points to serve content you'll find relevant and satisfying – and all three publish information to explain how these systems work.

TikTok, for example, says it bases its recommendations on predicting the likelihood that each individual user will interact with a video.
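To make that description a little more concrete, here is a minimal, purely illustrative sketch of what "predicting the likelihood that a user will interact with a video" can look like: score each candidate by predicted engagement and rank the feed accordingly. The signals, weights and names below are invented for illustration and are not drawn from any platform's actual system.

```python
# Illustrative sketch only: NOT TikTok's, Meta's or YouTube's actual code.
# It shows the general idea the companies describe: score each candidate video
# by the predicted probability that a given user will interact with it, then rank.

from dataclasses import dataclass

@dataclass
class Candidate:
    video_id: str
    p_like: float        # hypothetical model output: probability the user likes it
    p_watch_full: float  # hypothetical model output: probability of a full watch
    p_share: float       # hypothetical model output: probability of a share

def engagement_score(c: Candidate) -> float:
    # Hypothetical weighted blend of predicted interactions; real systems use far
    # more signals and learned weights, not hand-picked constants like these.
    return 0.5 * c.p_watch_full + 0.3 * c.p_like + 0.2 * c.p_share

def rank_feed(candidates: list[Candidate]) -> list[Candidate]:
    # Highest predicted engagement first.
    return sorted(candidates, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = rank_feed([
        Candidate("cooking_clip", p_like=0.40, p_watch_full=0.70, p_share=0.05),
        Candidate("protest_clip", p_like=0.55, p_watch_full=0.45, p_share=0.20),
    ])
    for c in feed:
        print(c.video_id, round(engagement_score(c), 3))
```

In a real system the moderation and policy layers the companies describe would sit alongside this kind of ranking, which is part of why users struggle to tell whether a post fell flat because of a rule or simply because the model predicted low engagement.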

The companies say they do remove or suppress posts, but only when that content violates clearly stated community guidelines, which are designed to balance safety with free expression.

TikTok, Meta and YouTube say they always notify users about these decisions, and they all regularly publish transparency reports with details about their moderation decisions.  

In practice, though, social media platforms have repeatedly meddled with which voices are amplified or buried, contradicting their rhetoric about openness and fair play, according to investigations by the BBC, advocacy groups, researchers and other news outlets.

Separate investigations by the BBC and Human Rights Watch found that Facebook and Instagram systematically restricted Palestinian users and content supporting Palestinian human rights in the weeks following the 7 October Hamas attacks in Israel.

A Meta spokesperson told the BBC the company makes "mistakes", but any implication that it deliberately suppressed particular voices is "unequivocally false".

In 2019, leaked documents showed that TikTok instructed moderators to suppress content from users who were "ugly", poor, disabled or LGBTQ+ because this material created a "less fancy and appealing" environment.

At the time, TikTok said the practice was a "blunt" anti-bullying measure that was no longer in place.

The same document leaks showed TikTok policies banned "controversial" live streams when users criticised governments, though TikTok said that policy was "not for the US market". 

In 2023, TikTok admitted it had a secret "heating" button it used to make hand-picked videos go viral, a tool that was reportedly used to court business partnerships and sometimes abused by employees.

TikTok did not answer my questions about whether this practice continues. 

"Well, if they've got a heater button, they have a cooler button," Pearlman says.

"It's a simple thought process." 

YouTube has faced similar controversies.

A group of LGBTQ+ creators sued YouTube in 2019, for example, claiming the company demonetised videos that contained words like "gay" or "trans".

The lawsuit was dismissed, and YouTube says it has never had policies that prohibit or demonetise LGBTQ+ related content. 

The music festival that doesn't exist

Social media companies do put their thumbs on the scale, and in some cases, they're happy to tell you about it.

TikTok, for example, has a number of webpages that explain its recommendation system in detail.

The company says it's dedicated to "maintaining content neutrality, or in other words, the recommendation system is designed to be inclusive of all communities and impartial to the content it recommends".

However, not all videos are created equal.

The company says your feed is also designed around "respecting local contexts and cultural norms" and "providing a safe experience for a broad audience, and in particular teens".

The problem is that the policies governing social media are heavy-handed and largely invisible, says Sarah T Roberts, a professor and director of the Center for Critical Internet Inquiry at the University of California, Los Angeles (UCLA).

People rarely know where the boundaries lie, Roberts says, or when the platforms quietly push some posts forward and others out of sight.

"It's an instrumentalisation of rules that at first blush, and even when one goes deeper, don't make any sense for regular people," she says.

"People come up with all sorts of folk theories in the context of that opacity."

According to Roberts, creating mechanisms to skirt various rules, real or imagined, just becomes part of the culture. It can lead things in an odd direction.

In August 2025, thousands of social media users posted about an exciting new music festival in Los Angeles.

People gushed about sets from Sabrina Carpenter and revelled in stories about the light shows.

But there was no festival. Carpenter wasn't performing. It was all a lie, and you were supposed to know that.

That month, mass demonstrations broke out across the US over raids by US Immigration and Customs Enforcement (ICE).

But online, many decided that tech companies were hiding the news.

The "music festival" was algospeak, a code word that erupted spontaneously and spread as people tried to communicate in thinly veiled language to fool the algorithms.

"We're in Los Angeles, California right now where a music festival is unfolding," content creator Johnny Palmadessa said in a TikTok video, emphasising the words as a wink to viewers.

A raucous crowd of protesters marched behind him, chanting and waving signs.

"Yes, we gotta call it a 'music festival' to ensure the algorithm shows you this beautiful concert," Palmadessa said in the video.

"Otherwise, we risk this video getting taken down."

 "Instead, the 'music festival' thing mostly started with people hypercorrecting because they weren't sure what the algorithm was and was not going to censor."

Here's the strangest part: there was no evidence that social media companies actually suppressed news of the protest, according to linguist Adam Aleksic, author of the book Algospeak: How Social Media Is Transforming the Future of Language.

"Sure, TikTok will prevent clusters of overly political content from clumping together, but they generally do allow protest coverage," Aleksik said in a video on the subject.

Ironically, using the term "music festival" drove people to engage with these videos because they wanted to feel like part of the in-group, which made the videos even more viral, according to Aleksik – and because the "music festival" videos were more popular than regular videos about the protest, it convinced people the censorship was real.

Researchers call this phenomenon the "algorithmic imaginary".

People change their behaviour in response to beliefs about how social media algorithms work.

Whether or not these beliefs are correct, this user behaviour ends up moulding the algorithm itself.

Is it all in our heads?

Algospeak is nothing new. But there are reasons to doubt whether it's even necessary.

You can find plenty of videos about Epstein, Gaza and a long list of other controversial subjects.

And if TikTok really wanted to limit videos about murder, wouldn't it have suppressed the word "unalive" by now, too? 

"None of us know what works that doesn't. We are just throwing everything at the wall and seeing what sticks," says Ariana Jasmine Afshar, a popular content creator who focuses on left-wing activism.

That isn't to say social media companies don't play a major role in shaping the public discourse.

Between 2023 and 2025, Meta openly suppressed political content, before reversing the policy in a sweeping set of changes rolled out after US President Donald Trump's second inauguration.

During that time, it's conceivable that sufficiently sneaky language might have fooled an algorithm designed to bury political takes.

Afshar was one of many who posted a video about the music festival protests.

Did the code words make a difference? "I have no clue," she says.

There's no doubt in Afshar's mind that social media companies interfere with posts about controversial subjects.

She says she's experienced it firsthand, and in some cases, Afshar is certain that algospeak helped her evade censorship.

Then again, she recognises that her own success is evidence of the social media companies allowing that same political controversy to thrive.

Afshar says a representative from Instagram actually contacted her last year to congratulate her on her work, and offered strategies to do even better on the platform. (A spokesperson for Meta confirmed that Instagram gets in touch to help popular creators.) 

"It's a real thing," but it's hard to sort fact from fiction, Afshar says, and the whims of the tech giants are vague and constantly changing.

"They really confuse me, to be completely honest with you."

If you want to understand what's really going on, the key is grappling with what the social media companies are trying to achieve, according to Roberts, the UCLA professor.

It isn't really about politics, she says. It's about money.

Social media companies make their money from advertising.

Ultimately, that means their goal is to make apps that lots of people want to use, fill them with content that makes advertisers comfortable and do whatever is necessary to prevent government regulators from getting in the way, Roberts says.

Every change to the algorithm and every content moderation decision comes down to that profit motive. 

The social media companies say the goal of their recommendation and moderation efforts is to create a safe and welcoming environment for their users.

"And it's true that most of the time, content moderation interests align with the best interests of the vast majority of users. But if and when they must deviate, they do," Roberts says.

"If people are dissatisfied with aspects of our civic life, is the best way to express that to just spiral out inside of platforms who are profiting off that dissatisfaction and frustration?" she says.

"I think we need to start reckoning, as a society, with the question of whether this is the best way for us to engage."
