The Illusion of Connection: Online Communities as Corporate Surveillance Factories
We’re told they’re for connection. For finding your tribe. For sharing recipes, memes, and mutual aid. But scratch the surface of any major online community and what you find isn’t belonging—it’s extraction. Every laugh, every tear, every late-night rant is mined, modeled, and monetized. The dream of digital democracy has been hijacked by algorithms designed not to serve people, but to predict and profit from their most volatile emotions. This isn’t an accident. It’s the business model.
Online communities aren’t broken. They’re working exactly as intended. And the cost is being paid in mental health, democratic stability, and the erosion of trust—while the architects walk away richer.
Outrage Isn’t a Bug—It’s the Feature
Let’s be clear: platforms don’t stumble into amplifying harmful content. They engineer for it. Internal research from Meta and TikTok, leaked by whistleblowers and confirmed by the BBC, showed that outrage drives engagement like rocket fuel. Anger keeps eyes on screens. Fear increases scroll time. Sadness? Also profitable, if it leads to doomscrolling. Joy? Less reliable. It doesn’t loop back as predictably.
When Facebook’s own 2020 internal review found that its algorithm rewarded divisive, inflammatory posts—not because users demanded them, but because the system was optimized for “time spent”—the company didn’t pivot. It doubled down. Why? Because outrage is sticky. It’s addictive. And it’s free labor: users generate the content, the platform sells the attention, and advertisers pay premiums to reach people in heightened emotional states.
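To make the mechanism concrete, here is a deliberately simplified sketch of what ranking by “time spent” looks like. This is not Meta’s actual code; the signal names and weights are invented for illustration. The point is structural: when the objective is engagement, nothing in the objective asks whether content is true or healthy.

```python
# Toy feed ranker: scores posts purely by predicted engagement.
# Signals and weights are illustrative, not any platform's real system.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    p_comment: float  # predicted chance the post sparks a reply pile-on
    p_reshare: float  # predicted chance of a reshare
    p_dwell: float    # predicted long "time spent" viewing the post

def engagement_score(post: Post) -> float:
    # Comments and reshares weigh heavily because they prolong sessions.
    # Note what is absent: no term for accuracy, well-being, or harm.
    return 3.0 * post.p_comment + 2.0 * post.p_reshare + 1.0 * post.p_dwell

def rank_feed(posts: list[Post]) -> list[Post]:
    # The angriest, most reply-provoking post wins by construction.
    return sorted(posts, key=engagement_score, reverse=True)
```

In a system like this, a post that reliably provokes furious comment threads outranks a calm, correct one, not by malice on any engineer’s part, but by arithmetic.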
This isn’t speculation. It’s documented. Whistleblower Frances Haugen testified before Congress that Meta knew Instagram worsened body image issues for one in three teen girls—and did nothing meaningful to change it. TikTok’s internal studies, similarly suppressed, showed how quickly harmful challenges and self-harm content could go viral when paired with trending audio. The pattern isn’t negligence. It’s prioritization: growth over safety, engagement over ethics.
And who bears the brunt? Not the executives in Menlo Park or Singapore. It’s teenagers navigating identity in a funhouse mirror. It’s refugees seeing their trauma turned into clickbait. It’s voters bombarded with lies engineered to trigger rage before facts can catch up.
The Misinformation Machine Feeds on Vulnerability
We love to blame “bad actors” for misinformation. But the real enabler isn’t just trolls in basements—it’s the architecture of attention itself. A 2025 systematic review in Frontiers in Communication confirmed what many suspected: vaccine hesitancy didn’t spread because people are irrational. It spread because anti-vaccine narratives were algorithmically amplified in communities with low media literacy and limited access to trusted sources.
Think about that. The same systems that recommend your next yoga video also push conspiracy theories to someone recovering from illness, isolated, searching for answers. The infodemic didn’t emerge from nowhere. It was fed by design choices that prioritize virality over veracity. When a grieving parent searches for answers about their child’s sudden illness, the algorithm doesn’t care if the top result is a peer-reviewed study or a video claiming vaccines contain microchips. It cares which one keeps them watching.
And it works. The Frontiers in Computer Science study noted that young adults in enclosed online spaces—like gaming voice chats or niche Discord servers—are especially vulnerable. In these echo chambers, emotions intensify. Opinions harden. Radicalization doesn’t require a charismatic leader; it just needs a feedback loop where dissent is punished and conformity is rewarded with belonging.
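How fast does such a loop harden a group? A crude toy simulation makes the dynamic visible. To be clear, the update rules and parameters below are invented for illustration; they are not drawn from the studies cited above.

```python
import random

def simulate_echo_chamber(members: int = 50, steps: int = 2000, seed: int = 0):
    """Toy model: agreeing is rewarded (members drift toward the group
    mean), and the group as a whole drifts toward its dominant pole."""
    rng = random.Random(seed)
    opinions = [rng.uniform(-1.0, 1.0) for _ in range(members)]  # -1..1 spectrum
    for _ in range(steps):
        mean = sum(opinions) / members
        i = rng.randrange(members)
        opinions[i] += 0.1 * (mean - opinions[i])           # conformity reward
        opinions[i] += 0.02 * (1.0 if mean >= 0 else -1.0)  # amplification drift
        opinions[i] = max(-1.0, min(1.0, opinions[i]))      # stay on the spectrum
    spread = max(opinions) - min(opinions)
    return sum(opinions) / members, spread

mean, spread = simulate_echo_chamber()
print(f"final group mean: {mean:+.2f}, opinion spread: {spread:.2f}")
```

Run it and the spread of opinion collapses while the group mean drifts toward an extreme. No charismatic leader is required; conformity rewarded a few thousand times does the work on its own.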
This isn’t free speech. It’s *engineered susceptibility*.
Who Profits? Follow the Data Trail.
Let’s talk money. In 2023, Meta generated over $134 billion in revenue—98% from advertising. TikTok’s parent company, ByteDance, pulled in roughly $120 billion globally, overwhelmingly from ads. These aren’t tech companies. They’re attention brokerages dressed in hoodies and mission statements.
Their customers aren’t you. They’re the advertisers paying to reach you during your most emotionally vulnerable moments: after a breakup, during a job search, while grieving, or lying awake at 2 a.m. scrolling for distraction. Your anxiety isn’t a side effect—it’s the product.
And the returns are staggering. A 2021 study by the Center for Countering Digital Hate found that just twelve anti-vaxxers, the so-called “Disinformation Dozen,” were responsible for up to 65% of anti-vaccine content on social media—content that reached millions, thanks to algorithmic boosting. Yet platforms claimed helplessness. “We can’t catch everything,” they said. But when it comes to copyright infringement or terrorism content, they deploy AI filters at scale. The difference? Those violate their terms. Harmful misinformation often boosts engagement—so it stays up, until public pressure forces a temporary, performative takedown.
This isn’t neutrality. It’s complicity dressed as innovation.
The Lies We’re Sold About “Community Moderation”
Platforms love to tell us they’re hiring thousands of moderators. They shout about AI improvements. They release transparency reports full of metrics that look impressive until you ask: *What’s the baseline? What’s the trend? And who’s really being protected?*
The truth is grim. Moderators—often outsourced, underpaid, and traumatized—are expected to review thousands of disturbing posts per day for wages that barely cover rent in high-cost areas. Many develop PTSD. Turnover is brutal. And despite their labor, harmful content still slips through—not because they’re failing, but because the volume is designed to overwhelm.
Meanwhile, platforms push the myth that “community guidelines” are enough. But guidelines mean nothing when enforcement is reactive, inconsistent, and geared toward avoiding bad PR—not protecting users. A racial slur might get flagged. A coordinated disinformation campaign targeting an election? Typically left up until after the vote.
And let’s not forget the real loophole: private groups. Encrypted chats, invite-only Discords, closed Facebook groups—these are where the most dangerous organizing happens. From extremist militias to anti-trans hate rings, these spaces fly under the radar because they’re “not public.” But that’s a fiction. If it’s online, it’s traceable. The platforms choose not to look—because monitoring private groups at scale would slow growth, increase costs, and threaten the illusion of “privacy” they sell to users while mining their data.
This Isn’t Just About Tech—It’s About Power
We don’t talk enough about who benefits from a distracted, outraged, divided public. When people are busy fighting over manufactured culture wars, they’re not organizing for living wages. When they’re doomscrolling through climate doom, they’re less likely to demand systemic action. When they’re isolated in algorithmic bubbles, solidarity becomes harder to build.
The status quo serves those who profit from instability: speculators, monopolists, and politicians who thrive on chaos. A populace that’s anxious, misinformed, and emotionally exhausted is easier to manage. It’s harder to unionize. Harder to protest. Harder to imagine a world where public goods come before private gain.
And let’s kill the myth that regulation stifles innovation. The real innovation suppression happens when startups can’t compete because incumbents buy or bury them. When venture capital funds surveillance-as-a-service models instead of public-interest tech. When the most profitable apps are the ones that exploit human psychology most ruthlessly.
Real innovation would look like platforms designed for diminishing returns: less time spent, better information quality, stronger community bonds. But that’s not what shareholders want. They want growth. They want addiction. They want your attention, now and forever.
What Would a Public Internet Look Like?
Imagine if the digital commons were treated like, well, a common. Not a fiefdom ruled by unaccountable CEOs, but a public utility governed by democratic principles—like the post office or public broadcasting.
What if algorithms were transparent and auditable? What if engagement wasn’t the default metric, but well-being was? What if platforms were required to conduct independent impact assessments before rolling out new features—like drug trials for the mind?
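What might “transparent and auditable” mean in practice? As a thought experiment (the metric names and weights below are hypothetical, not drawn from any existing regulation or platform), a ranker could be required to publish its objective as plain, inspectable data rather than hiding it as a trade secret:

```python
# Hypothetical auditable ranker: the objective is declared data that a
# regulator or independent researcher can read, test, and challenge.
AUDIT_WEIGHTS = {
    "predicted_engagement": 1.0,
    "source_reliability":   2.0,   # e.g., independent fact-check track record
    "predicted_regret":    -2.5,   # e.g., "I wish I hadn't seen this" surveys
}

def transparent_score(signals: dict[str, float]) -> float:
    """Score is the declared weighted sum, and nothing else: undeclared
    signals raise an error rather than being silently folded in."""
    unknown = set(signals) - set(AUDIT_WEIGHTS)
    if unknown:
        raise ValueError(f"undeclared ranking signals: {unknown}")
    return sum(AUDIT_WEIGHTS[name] * value for name, value in signals.items())
```

An independent impact assessment would then mean registering weights like these and measuring their effect on well-being before launch, the way a drug trial registers its endpoints in advance.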
What if we treated data not as oil to be extracted, but as a collective resource—with rights, consent, and compensation built in?
None of this is utopian. It’s achievable. The EU’s Digital Services Act and AI Act are early, flawed steps. But they show that regulation isn’t the enemy of progress—it’s the guardrail against regression.
And let’s not forget the power of collective action. Workers at Google, Amazon, and Microsoft have walked out over contracts with ICE and military AI. Moderators have unionized. Users have deleted apps in protest. Change doesn’t come from waiting for benevolent billionaires. It comes from pressure—organized, sustained, and unafraid.
The Choice Isn’t Engagement or Safety—It’s Who We Serve
We’ve been sold a false dilemma: that we must choose between free expression and harm reduction. But that’s a distraction. The real choice is between a digital infrastructure designed to extract profit from human vulnerability—and one designed to serve human needs.
We don’t need more “innovation” in outrage optimization. We need courage. We need to admit that the tools we built to connect us are now being used to fray the social fabric, and that we allowed it because it was profitable.
The good news? We built this. We can rebuild it.
But first, we have to stop pretending the problem is “bad users” or “trolls.” The problem is the incentive structure. The problem is the concentration of power in unaccountable hands. The problem is that we’ve confused connection with consumption.
It’s time to demand better. Not just for ourselves—but for the next generation, who deserve a digital world that doesn’t profit from their pain.
Sources
— “Meta and TikTok let harmful content rise after evidence outrage drove engagement” (whistleblower disclosures, reported by the BBC)
— Frontiers | Unravelling the infodemic: a systematic review of misinformation dynamics during the COVID-19 pandemic
— Frontiers | Understanding and responding to complex online harms: misinformation, fake news, and young adults