Artificial intelligence (AI) is reshaping how cybercriminals conduct phishing scams, making these attacks harder to identify, even for experienced professionals. A study by Kaspersky highlights how AI has become a critical tool in crafting more personalized and effective phishing campaigns.
According to the report, 49% of respondents noted an increase in phishing incidents over the past year, and half expect attacks to grow further as AI technology advances. The sophistication of AI-driven scams is reshaping the cybersecurity landscape.
Kaspersky said that phishing is no longer just about sending random emails and hoping someone clicks. AI allows attackers to tailor messages that feel personal and genuine.
AI’s role in phishing
AI enables attackers to analyze publicly available information, such as social media profiles and company websites, to create personalized emails. For example, a chief financial officer might receive an email that looks and sounds like it came from their CEO, referencing recent company events. This level of detail often makes malicious messages indistinguishable from legitimate ones.
Cybercriminals are also using deepfake technology to impersonate executives in audio and video messages. In one notable case, attackers staged a fake video conference that convinced an employee to transfer $25.6 million.
“Deepfakes are a game-changer in phishing. The technology allows scammers to mimic voices and appearances with alarming accuracy,” Kaspersky said.
Evading traditional defenses
AI also helps attackers bypass email security systems. By mimicking legitimate email patterns, phishing messages often slip through detection software. Machine learning tools refine these campaigns in real time, increasing their effectiveness.
The psychological aspect of phishing further complicates detection. Many scams create a sense of urgency or fear, pressuring employees to act quickly without verifying the request’s authenticity.
Building stronger defenses
Kaspersky emphasizes the need for a multi-layered approach to cybersecurity. Regular training on AI-focused phishing tactics is essential, helping employees recognize subtle signs of scams. Businesses should also adopt advanced tools that detect anomalies in emails and enforce a zero-trust security model to limit access to sensitive data.
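To make the anomaly-detection idea concrete, here is a minimal, illustrative sketch of the kind of heuristic checks such tools layer together. The function name, phrase list, and example addresses are hypothetical; production email-security products rely on far richer signals (sender history, authentication records like SPF/DKIM, and machine-learning models), not a short keyword list.

```python
# Hypothetical heuristic email checks, assuming we only have the sender's
# display name, address, the organization's real domain, and the body text.
URGENCY_PHRASES = ["urgent", "immediately", "wire transfer", "act now", "confidential"]

def flag_email(sender_display: str, sender_address: str,
               org_domain: str, body: str) -> list:
    """Return a list of reasons an email looks suspicious (empty if none)."""
    reasons = []
    # A display name claiming an internal identity while the actual address
    # sits outside the organization's domain is a classic spoofing cue.
    if not sender_address.lower().endswith("@" + org_domain.lower()):
        reasons.append("sender address does not match the organization's domain")
    # Pressure language ("act now", "urgent") is a common social-engineering cue.
    lowered = body.lower()
    hits = [p for p in URGENCY_PHRASES if p in lowered]
    if hits:
        reasons.append("urgency cues: " + ", ".join(hits))
    return reasons

# Example: a look-alike domain ("exarnple" vs "example") plus pressure language.
print(flag_email("CEO Jane Doe", "jane.doe@exarnple-corp.com", "example-corp.com",
                 "Urgent: please wire transfer the funds immediately."))
```

Even simple rules like these illustrate why layered defenses work: each check is weak alone, but a message that trips several at once is worth holding for human review.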
“AI is making phishing more dangerous, but the right combination of technology and training can mitigate the risks,” Kaspersky said.