
Kaspersky: Cybercriminals use AI for more targeted attacks

According to cybersecurity company Kaspersky, cybercriminals' use of artificial intelligence (AI) in malicious attacks goes beyond prompting it to write malware.

Cybercriminals are now harnessing AI to aid in Advanced Persistent Threats (APTs), a more targeted and sophisticated form of online attack.

“Nowadays, APT actors combine sophisticated techniques to evade detection and stealthy methods to maintain persistence,” said Noushin Shabab, senior security researcher for the Global Research and Analysis Team (GReAT) Asia Pacific at Kaspersky. “New AI developments can be of assistance to cybercriminals from the reconnaissance stage to data exfiltration.”


Shabab noted that there are currently at least 14 active APT groups operating in the Asia Pacific (APAC) region.

Beyond malware creation, AI can analyze data from multiple sources to identify potential targets. Automation also expedites the process of building attack infrastructure, including purchasing network resources, creating accounts, and compromising existing network infrastructure and accounts.

“Smart machines can even spot the weak entry points by assessing the company’s employee details, third-party relationships, and network architecture,” Shabab said.

AI can also aid cybercriminals in spear phishing by crafting emails targeted at a specific individual, organization, or business.

Spear phishing

“AI can analyze patterns in network and system activity and launch attacks during periods of low-security vigilance or high noise,” Shabab said. “Machines can assist cybercriminals to find the best timing for a phishing campaign to get initial access into the victim’s networks.”

The majority of APT actors in the APAC region still rely heavily on phishing as their primary initial access technique.

When it comes to execution, machine learning can adapt malware’s behavior in response to security measures, increasing the likelihood of a successful attack.

“AI-based obfuscation can also create polymorphic malware that changes its code structure to evade detection,” Kaspersky explained.

AI-driven social engineering tactics could also heighten the probability of users engaging with malicious files, enhancing the success of the execution phase.

“AI can analyze network traffic patterns in order to better blend in with the regular network behaviors and determine the most suitable communication channel to exfiltrate data for each victim. It can even optimize obfuscation, compression, and encryption of the stolen data to avoid abnormal traffic detection,” Shabab said.
