Cybercriminals are always looking for ways to exploit new technologies, including generative artificial intelligence (GenAI). While most AI models have safeguards to prevent misuse, security researchers at Tenable found that DeepSeek R1 can be tricked into generating malware, raising concerns about AI-driven cyber threats.

Tenable researchers tested DeepSeek R1’s ability to create malicious software by attempting to generate a keylogger and a simple ransomware program. At first, the AI model refused to comply. However, using common jailbreaking techniques, researchers bypassed its safeguards.

“Initially, DeepSeek rejected our request to generate a keylogger,” said Nick Miles, staff research engineer at Tenable. “But by reframing the request as an ‘educational exercise’ and applying common jailbreaking methods, we quickly overcame its restrictions.”

Generating a keylogger

Once its restrictions were bypassed, DeepSeek R1 generated a keylogger that encrypts its logs and stores them discreetly on the device. It also produced a ransomware executable capable of encrypting files.

The study raises concerns that AI could make cybercrime more accessible. While DeepSeek R1's output still requires manual adjustments to work properly, it lowers the barrier for people with little to no coding experience to develop malware. By providing foundational code and suggesting relevant techniques, AI models could help aspiring cybercriminals learn faster.

“Tenable’s research highlights the urgent need for responsible AI development and stronger guardrails to prevent misuse. As AI capabilities evolve, organizations, policymakers, and security experts must work together to ensure that these powerful tools do not become enablers of cybercrime,” said Miles.
