97,895 underground forum conversations reveal: hacker communities hate AI too

A joint study by the University of Edinburgh, University of Cambridge, and University of Strathclyde analyzed 97,895 underground forum conversations spanning 2022 through the end of 2025, finding that AI has neither significantly lowered the barrier to hacking nor seriously disrupted existing criminal business models.
(Background summary: Morse code tricks Grok, BankrBot transfers instantly: hackers steal $170k DRB, AI wallet first compromised)
(Additional background: Trump wants AI models to be reviewed before release: Mythos alarms the White House, Pentagon to lead security testing)

Table of Contents


  • The reputation wars on forums, AI as a disruptor
  • 97,895 conversations overturn the panic that “AI lowers hacking barriers”
  • AI slop is a common enemy of global communities

A three-year study of nearly 100,000 underground forum conversations arrives at a conclusion the cybersecurity industry did not expect: the people who hate AI-generated content the most may be the very ones you sympathize with least.

Daily life on hacker forums is less about teaching people to breach defenses and more about criticizing members who lazily use AI to write their posts. This counterintuitive scene emerges from a cross-institutional study led by University of Edinburgh cybersecurity researcher Ben Collier, in collaboration with the Universities of Cambridge and Strathclyde. The team systematically analyzed 97,895 conversations from 2022 to 2025, attempting to answer a question the industry has long debated: how much stronger has AI actually made hackers?

The reputation wars on forums, AI as a disruptor

To understand why hackers despise AI-generated spam, you first need to understand their community structure. Underground forums are not just black markets for trading stolen data; they are also a “reputation economy”—meaning members’ status, trustworthiness, and influence are built on peer recognition of their technical skills.

Forums have point systems, writing contests, and a culture of roasting each other. The core logic is not fundamentally different from Stack Overflow or GitHub: you have to prove you really know your stuff.

AI slop (large volumes of low-quality text batch-generated by AI) disrupts this ecosystem just as it does legitimate platforms. One user on Hack Forums wrote: "I see many people using AI to write content, and it makes me angry. They're too lazy to write two sentences themselves." Another said: "If I want to chat with AI, there are plenty online. I come here to interact with real people."

Ben Collier’s observation highlights a structural contradiction: “They are actually somewhat conflicted about AI because it shakes the foundation of their claim to be tech experts.” Once any novice can post seemingly professional penetration testing tutorials, the prestige that once came from rare knowledge gets diluted.

97,895 conversations overturn the panic that “AI lowers hacking barriers”

The core finding of this research is a cooling of the mainstream cybersecurity narrative. The conclusion clearly states:

“AI has not significantly lowered technical barriers nor caused major impacts on existing business models or operational practices. Its main influence is concentrated in highly automated areas, including SEO scams, social media bots, and some forms of emotional scams.”

In other words, what AI has truly changed are scenarios that never required much technical skill to begin with. Romance scammers use AI to translate messages and make their social engineering more fluent; SEO farms use AI to generate content at scale; social media bots produce more natural replies. These changes occur at the low-barrier, batch-operation level, while the core hacking skills the industry worries about, such as penetration testing, vulnerability research, and zero-day exploitation, show almost no measurable trace of AI influence.

The buzz in hacker communities around Anthropic's latest frontier model, Claude Mythos Preview, briefly alarmed cybersecurity circles. But Collier's research suggests the panic may be disproportionate to the actual threat.

AI slop is a common enemy of global communities

The problems hacker forums face closely mirror those of Reddit and technical discussion boards: AI spam degrades discussion quality, dilutes signals of expertise, and drives real users away. The study even records that some forums are seeing declining visitor numbers because Google's AI summaries siphon traffic away from them.

This isn’t an issue unique to criminal communities; it’s a structural pressure on the entire online ecosystem.

Some hackers are not opposed to AI outright; they will accept AI assistance for fixing grammar or tightening structure, but they strongly disdain "having AI write everything for you." This attitude aligns almost perfectly with mainstream views in many technical writing communities.
