Sophos, a cybersecurity-as-a-service provider, saw that scammers can easily exploit different types of artificial intelligence (AI), including ChatGPT, to use in widespread fraud even with minimal technical expertise.
While the potential for AI in cybercrime looms large, Sophos’ investigation into cybercriminal forums uncovered a mixed reception. Although AI applications are in their infancy among threat actors, discussions on the dark web highlight emerging concerns and explorations of AI’s potential in social engineering scams, including observed instances of AI being used in romance and cryptocurrency scams.
“It’s natural — and expected — for criminals to turn to new technology for automation,” Sophos said.
Sophos noted that the birth of spam email marked a crucial moment in scamming technology, significantly altering the landscape of cyber threats. The emergence of new AI capabilities is poised to revolutionize the field yet again. Once an AI technology capable of generating fully automated threats becomes available, it is inevitable that individuals will leverage it. Sophos has already observed generative AI components being incorporated into traditional scams, such as AI-generated text or images used to lure and deceive unsuspecting victims.
ChatGPT
Sophos revealed discussions surrounding compromised ChatGPT accounts for sale, attempts to bypass ChatGPT security measures, and the emergence of 10 ChatGPT derivatives allegedly intended for cyber-attacks and malware development. However, these malicious derivatives garnered mixed reactions within the cybercriminal community, with skepticism prevailing due to suspicions of potential scams orchestrated by the creators.
“We did see some cybercriminals attempting to create malware or attack tools using LLMs, but the results were rudimentary and often met with skepticism from other users,” said Christopher Budd, director, X-Ops research, Sophos.
Budd also highlighted that despite initial concerns about AI misuse by cybercriminals, the research indicates prevailing skepticism. The dark web forums examined showed limited engagement with AI-related discussions compared to other cyber topics such as cryptocurrency. Attempts to create malicious tools using AI were basic and met with doubt from fellow users, and in some cases threat actors inadvertently revealed personal information while showcasing ChatGPT.