The Challenge of AI in the Criminal Underworld
In recent years, the rise of artificial intelligence has sparked widespread concern about its potential misuse by malicious actors. However, new research from the University of Edinburgh offers a surprisingly reassuring perspective. By analyzing over 100 million posts from underground cybercrime forums, researchers found that cybercriminals are struggling to effectively integrate advanced AI into their operations. While there is palpable interest in tools like chatbots and coding assistants, the technology has yet to revolutionize hacking or significantly boost criminal capabilities. This indicates that, despite fears, the digital underworld is not being supercharged by AI in the ways many predicted.
A Tool for the Skilled, Not a Shortcut for Novices
The study reveals a fundamental limitation: AI coding assistants are most beneficial to those already proficient in programming. For seasoned hackers, tools like OpenAI’s Codex can serve as a digital partner, answering technical questions or generating quick-reference “cheatsheets.” Yet they do not provide a meaningful advantage in breaking security systems or developing novel exploits. As one forum user bluntly put it, you must first learn the ropes of programming on your own before AI can do you any good. This requirement creates a high barrier to entry, preventing unskilled would-be criminals from using AI as a magic key to sophisticated cybercrime. Consequently, AI has not democratized high-level hacking; it has instead remained a supplementary tool for the already capable.
Where AI Does Make an Impact: Low-Skill Fraud
Where cybercriminals have found success with AI is in less technically demanding, easily automated schemes. These include creating social media bots, orchestrating romance scams, and engaging in search engine optimization (SEO) fraud—such as producing fake websites that climb search rankings to harvest illicit ad revenue. These activities rely more on volume and persuasion than on complex code, making them ideal for automation with current AI models. This shift shows that AI’s primary criminal utility today lies in scaling traditional scams rather than pioneering new forms of cyberattacks, highlighting a gap between the hype and the practical reality of AI-enabled crime.
The Guardrails Are Holding—For Now
A critical finding is that the safety measures implemented by leading AI companies appear to be effective. Cybercriminals frequently discuss mainstream models like Anthropic’s Claude but report great difficulty in bypassing their built-in ethical safeguards and security protocols. Attempts to “jailbreak” these systems to generate malware or phishing emails often fail, forcing criminals to seek alternatives. This success of corporate guardrails is a significant, though possibly temporary, win for cybersecurity, suggesting that responsible AI development can actively hinder illicit use.
The Turn to Inferior Alternatives
Frustrated by robust safety features, some cybercriminals are pivoting to older, lower-quality open-source AI models. These models are easier to manipulate but come with substantial drawbacks: they are less capable, often produce unreliable outputs, and demand considerable computing resources to operate effectively. This trade-off means that even when criminals circumvent restrictions, they gain access to tools that are inefficient and cumbersome. The research indicates that this dynamic significantly limits the operational impact of AI in cybercrime, as the most powerful and user-friendly models remain off-limits for overtly malicious purposes.
A Nuanced Outlook on AI and Security
Overall, the study paints a nuanced picture. While AI has found a niche in automating certain fraudulent activities, it has not become the game-changer for serious cybercrime that many feared. The technical barriers, coupled with successful safety guardrails, have contained its misuse. This does not mean complacency is warranted—technology and criminal tactics will continue to evolve—but for now, the narrative of AI as an immediate and overwhelming force in the hacking world appears overstated. The real story is one of limitation and frustration for cybercriminals, offering a measure of reassurance about the current resilience of our digital ecosystems.