It’s time we banned anonymity online

For years, we defended online anonymity for the sake of dissent and protection. Today, the harm it enables far outweighs whatever virtue remains in the status quo. It’s time for a reckoning.

In the dial-up era, anonymity felt almost noble: it shielded queer teenagers testing fragile identities, whistle-blowers exposing corruption, and Chinese dissidents dodging censors. Thirty years later, that shield has become a blunt weapon wielded by the worst actors while the vulnerable it once protected remain as exposed as ever.

In 2024, economists Florian Ederer, Paul Goldsmith-Pinkham, and Kyle Jensen scraped 1.7 million posts from the anonymous Economics Job Market Rumors board and found them far more toxic than 95 percent of Reddit’s largest sub-forums. A Tulane analysis adds the commercial twist: each outrage-drenched comment drags offended readers back within the hour, extending their scroll time and, with it, platform revenue.

Platforms have discovered that anonymous vitriol is a feature, not a bug.

Local Singapore forum threads show the pattern holds globally. Behind cartoon avatars, strangers sling slurs at migrant workers or doxx schoolteachers, secure in the knowledge that a fresh handle costs nothing.

The harm bleeds offline with devastating consistency: employers Google candidates and find fabricated character assassinations, teenagers nurse self-harm ideation after anonymous harassment campaigns, and reputations warp under accusations that victims can neither refute nor trace.

The cost of hiding our names

Immanuel Kant warned that public reason rests on the courage to place one’s name beneath one’s speech. For Kant, this wasn’t merely about accountability — it was about the very possibility of rational discourse. In 2025, we’ve outsourced that courage to underpaid moderators and automated filters, then wondered why the civic commons feels scorched.

An MIT study tracking 126,000 Twitter cascades showed falsehoods travel up to six times faster than verified facts. False news stories were 70 percent more likely to be retweeted than true stories, particularly for political content. Outrage is simply more contagious than accuracy, and anonymity lowers the cost of lighting the match while removing any incentive to verify the fuel.

This creates what philosophers call an “epistemic crisis” — a breakdown in our collective ability to distinguish truth from falsehood. When anonymous accounts can manufacture consensus through bot networks and coordinated inauthentic behavior, the very foundation of democratic decision-making erodes.

LinkedIn offers a compelling counterfactual to anonymous chaos. Roughly one-fifth of its 800 million users hold verified badges, and the company’s transparency report credits real-name friction for blocking 94.6 percent of fake accounts before they reach feeds. Independent risk firm Fortra ranks LinkedIn’s threat exposure for disinformation and harassment as dramatically lower than that of Facebook or X for comparable user bases.

And while critics sneer at the site’s humble-brag culture and motivational syrup, those social norms — politeness, fact-checking, disclosure of conflicts — make substantive discourse possible. When liability shadows every keystroke, civility becomes rational. Users fact-check each other’s claims, provide source citations, and engage in disagreement without personal attacks.

LinkedIn commands premium advertising rates precisely because brands trust their messages will appear in environments free from hate speech and conspiracy theories. Advertisers pay 40-60 percent more for LinkedIn placements compared to equivalent demographics on Facebook or X. Quality trumps quantity when reputation is on the line.

Accountability goes global

The commercial value of trust, as exemplified by LinkedIn, has not gone unnoticed by policymakers. As platforms struggle — or refuse — to police themselves, regulators worldwide are stepping in, recognising that anonymous impunity threatens democratic discourse.

South Korea enforces real-name log-ins during election periods, demonstrably reducing election misinformation and foreign interference. The EU’s Digital Services Act imposes strict traceability requirements for illegal content, with fines reaching six percent of global turnover. Spain’s prime minister has called for a continental ban on online anonymity to safeguard democracy and children.

These aren’t authoritarian overreaches but recognition that speech affecting others cannot remain consequence-free indefinitely. The regulatory momentum reflects growing consensus that platforms’ current approach — privatising engagement benefits while socialising harm costs — is unsustainable in democratic societies.

On the other side of the debate, civil-liberties lawyers argue that whistle-blowers, domestic-abuse survivors, and political dissidents still need protection from retaliation. Their concerns are legitimate, and any regulatory solution must avoid replicating authoritarian overreach. Singapore’s approach offers a template for how this calibration can work in practice, balancing privacy with accountability for harmful speech.

Under the Protection from Harassment Act (POHA), victims can petition the specialised Protection from Harassment Court for “disclosure orders” — judicial determinations that harm from anonymous speech outweighs the speaker’s privacy interests. Judges then direct Internet intermediaries to unmask anonymous accounts so proceedings can begin. Applications have doubled since the court opened in 2021, mostly involving cyber-bullying and doxxing.

This surge in cases signals a wider acceptance of a system that doesn’t eliminate anonymity, but redefines its boundaries — it’s regulated pseudonymity in action. Whistle-blowers may still post under handles when exposing wrongdoing, but a trusted custodian holds the real-name key, released only under judicial warrant. 

German-American historian and philosopher Hannah Arendt would recognise the democratic bargain: the private self stays private, yet the public sphere is no longer defenceless. Privacy survives where it serves legitimate interests; impunity evaporates where it enables systematic harm.

Why the market won’t self-correct

But if the law seeks balance, the market runs on something else entirely. Even as courts and philosophers map the ethical boundaries of speech and privacy, tech platforms operate on a different calculus: profit.

Platforms themselves have little incentive to change because their business models directly profit from the chaos that anonymous abuse creates. Advertising markets trade in attention-seconds, and conflict is the cheapest accelerant available. Posts laced with anger spike dwell time far more reliably than neutral updates, boosting ad exposure and revenue.

Meta’s January 2025 pivot made this arithmetic explicit: the company eliminated third-party fact-checkers and loosened content moderation rules, openly choosing engagement over accuracy. Internal research showed executives knew these changes would increase misinformation and harassment while boosting time-on-platform metrics. Rage prints money; restraint erodes it.

This creates a collective action problem — individual platforms cannot unilaterally implement stricter identity verification without losing users to competitors who maintain permissive policies. Market failure of this magnitude requires regulatory intervention to level the playing field and align private incentives with public interests.

Accountability without censorship

Market failure leaves legislators as the only actors capable of forcing systematic change. Every account with the ability to broadcast to the public must be tethered to a verified identity, with identity checks completed before the first keystroke appears on-screen. This isn’t about censoring ideas but ensuring that speech carries appropriate consequences for its effects on others.

A durable verification emblem should accompany each profile, signalling that legal liability shadows every post. This transparency allows readers to weigh source credibility while maintaining space for legitimate dissent. The goal is not eliminating disagreement but ensuring disagreement occurs between accountable actors operating in good faith.

Because access to verified identity data creates enormous potential for abuse, the law must impose severe penalties on custodians who mishandle information. Any leak, sale, or unauthorised reuse of user credentials should trigger fines that meaningfully impact shareholder value and, where warranted, criminal charges. The system cannot work if users cannot trust that their identity information will be protected.

The economic case extends beyond reduced moderation costs. LinkedIn’s premium advertising rates demonstrate that brands will pay substantially more to reach audiences in trustworthy information environments. An enforced identity layer would universalise that trust dividend, creating new revenue streams while reducing social costs of misinformation and harassment.

Independent analysis suggests online misinformation costs the global economy $78-127 billion annually through reduced institutional trust, increased polarisation, and direct harms from medical misinformation and financial scams. A verified identity system would internalise many of these costs. In twenty years, we have tested every content moderation approach except meaningful accountability; the experiment in consequence-free speech has failed catastrophically.

Character in a verified age

But regulatory fixes alone are not enough. Once real-name guardrails repair the institutional incentives, the deeper work shifts to the character of digital citizenship itself: the people behind those newly verified profiles.

Aristotle located civic health in phronesis — practical wisdom — because wise actors pause to weigh ends against means before they speak. Identity enforcement lengthens that pause, making rashness personally costly when one’s reputation hovers beside each post.

With accountability restored, reciprocity becomes rational self-interest. Each participant invests in civility because every other participant must do the same, creating positive feedback loops that strengthen over time. This foundation makes further cultural improvements possible: digital literacy curricula that treat careful reading as civic duty, and editorial norms that value sourced argument over volume.

Platforms that profit from anonymous engagement loops will resist these changes, just as the automotive industry once fought seat belt mandates. Eventually, democratic societies decide the social costs have become too high, and industries adapt. The current toll from unaccountable online speech — measured in democratic backsliding, mental health crises, and collapsed epistemic foundations — has reached that threshold.

From chaos to community online

To build a digital commons worthy of democratic ideals, we must strip the mask by default and reserve escrowed exceptions for those who genuinely need protection. Let the comment section resemble a town hall again, not a gladiator pit engineered to extract profit from manufactured conflict.

The day we log on as ourselves — accountable for our words, responsible for their impact, invested in discourse quality rather than sheer volume — is the day the digital commons begins to resemble a republic, not a marketplace for outrage.

But this transformation will not happen by accident. It demands political will steeled against industry pressure, institutions willing to defend public reason, and a collective refusal to settle for algorithmic chaos as the cost of digital life. The alternative — outsourcing civic discourse to profit-maximising code that rewards humanity’s lowest instincts — can only end in democratic decline.

The internet’s promise was never about frictionless connectivity or infinite reach. It was about public life with dignity, consequence, and room for disagreement that doesn’t descend into spectacle or erasure. Anything less is an abdication.

We have tested consequence-free speech in every form, and the cost is now undeniable. The time for equivocation is over. The onus is on us to make online speech worthy of democratic citizenship, or we lose the republic, pixel by pixel, click by click.
