Negombo, Sri Lanka – In recent years, artificial intelligence (AI) has rapidly evolved from a niche technology to a pervasive force influencing nearly every aspect of modern life. While AI has provided significant advancements in areas like healthcare, finance, and communications, it also presents new challenges, particularly in cybersecurity.
“Most of the artificial intelligence we see today revolves around machine learning,” said Alexey Antonov, team lead for Data Science at Kaspersky during the company’s annual APAC Cyber Security Weekend (CSW). “Machine learning, as the name suggests, refers to algorithms that can learn from data.”
A particularly powerful application of machine learning is in the deployment of artificial neural networks, such as those used in large language models (LLMs) like ChatGPT. These models can generate human-like text, providing great benefits to society, but they can also be exploited for malicious purposes.
One of the most pressing concerns Antonov highlighted is the use of AI in cybercrime. Malicious actors have begun to harness AI to create sophisticated tools that can evade traditional security measures.
Antonov explained that it is possible to develop malicious software using AI and even to hide adversarial technologies within seemingly harmless applications. For example, an AI-powered smartphone application could potentially log all of the user's inputs by analyzing motion sensor data, effectively turning the device into a surveillance tool.
He also pointed to the growing threat of AI-enhanced password cracking. He referenced a recent leak of compiled credentials, known as RockYou2024, which exposed over 8.2 billion unique passwords.
“By training a special language model on these leaked passwords, we were able to demonstrate that AI could speed up the usual brute-force algorithms by three times,” Antonov said. “This means that a vast number of passwords, which were once considered safe, are now vulnerable.”
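The approach Antonov describes, learning the statistical shape of real passwords so that the most password-like guesses are tried first, can be sketched with a minimal character-level bigram model. Everything below is illustrative: the tiny "leaked" corpus, the candidate guesses, and the smoothing constant stand in for the far larger dataset and model Kaspersky trained.

```python
from collections import defaultdict
import math

# Toy "leaked" corpus standing in for RockYou2024 (illustrative only).
leaked = ["password1", "qwerty123", "letmein", "password123", "dragon1"]

# Character-level bigram counts: how often character b follows character a.
# "^" marks the start of a password, "$" marks the end.
counts = defaultdict(lambda: defaultdict(int))
for pw in leaked:
    for a, b in zip("^" + pw, pw + "$"):
        counts[a][b] += 1

def avg_log_likelihood(candidate):
    """Score a guess by how password-like it looks, per character."""
    score = 0.0
    for a, b in zip("^" + candidate, candidate + "$"):
        total = sum(counts[a].values())
        # Add-one smoothing over ~95 printable ASCII characters.
        score += math.log((counts[a][b] + 1) / (total + 95))
    return score / (len(candidate) + 1)

# Rank candidates so the most password-like strings are guessed first.
guesses = ["password2", "zzxqjv", "letmein1"]
ranked = sorted(guesses, key=avg_log_likelihood, reverse=True)
```

Ranking guesses this way is where the speedup over blind brute force comes from: strings that resemble the leaked corpus, such as "password2", are tried long before random strings like "zzxqjv".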
The implications of AI in social engineering attacks are equally concerning. Antonov emphasized that AI’s ability to generate content — text, images, or audio — can be used to create highly convincing phishing emails or to generate deepfake videos and audio messages. Imagine receiving a voice message from what sounds like a trusted friend or family member, asking for urgent financial help. It would be difficult to doubt the authenticity of such a message.

Deepfakes
The rise of deepfakes — videos or audio clips generated by AI that convincingly mimic real people — represents another significant threat. Deepfakes can be used to create entirely fictional personas or to impersonate real individuals, for example to authorize fraudulent payments in a victim's voice.
Deepfakes are not limited to financial fraud, however. Antonov also highlighted the potential for more personal attacks, such as romance scams, where victims are tricked into believing they are communicating with a genuine person, only to be defrauded later.
Vulnerabilities in AI systems
While much of the focus has been on the offensive capabilities of AI, Antonov emphasized that AI systems themselves are also vulnerable to exploitation.
“AI models, especially large language models (LLMs), can be manipulated through carefully crafted inputs,” he said. “For instance, it’s possible to bypass some of the safety measures in these models by using adversarial language, convincing them to perform actions they were designed to avoid.”
Another area of concern is the potential for AI models to be compromised through adversarial attacks.
“By introducing subtle noise or changes to an input, it’s possible to fool an AI model into making incorrect decisions,” Antonov explained. “This could have serious consequences, particularly in critical systems where AI is relied upon for decision-making.”
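The kind of attack Antonov describes can be illustrated, in heavily simplified form, with a toy linear classifier. The weights and input below are invented for illustration; real attacks apply the same signed, per-feature nudge (as in the fast gradient sign method) against a learned model rather than a hand-written one.

```python
# Toy linear "model": an input is flagged malicious if score > 0.
# Weights are invented for illustration, not taken from any real detector.
w = [0.5, -1.2, 0.8, 0.3]
x = [1.0, 2.0, -0.5, 0.2]  # an input the model classifies as benign

def score(v):
    return sum(wi * vi for wi, vi in zip(w, v))

def predict(v):
    return "malicious" if score(v) > 0 else "benign"

# FGSM-style perturbation: nudge each feature slightly in the direction
# of its weight's sign, so every small change pushes the score upward.
epsilon = 0.9
x_adv = [vi + epsilon * (1 if wi > 0 else -1) for wi, vi in zip(w, x)]
```

Each feature changes only slightly, yet the combined perturbation is enough to flip the model's decision from benign to malicious, which is exactly the failure mode that matters in critical decision-making systems.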
Defensive AI
Despite the growing risks associated with AI, Antonov remains optimistic about the potential for defensive AI technologies to counter these threats.
While offensive AI is a real and growing concern, Kaspersky is also developing defensive AI tools. The company has implemented neural networks to recognize and block malicious software, and it uses machine learning to scan the internet for new threats.
“Our goal is to stay one step ahead of the adversaries,” Antonov said.
He emphasized the importance of a multi-faceted approach to cybersecurity, combining AI-driven solutions with human vigilance. He noted that there is no single product or solution that can address all the challenges posed by AI.
The rapid advancement of AI technology has also raised important questions about legal and ethical oversight.
Antonov noted the lack of comprehensive laws governing the use of AI, particularly when it comes to issues like the creation and distribution of deepfakes. He felt that this is a matter that governments and international organizations need to address urgently.
Antonov suggested that one potential solution could be the use of digital signatures or other forms of verification to protect personal images and prevent them from being misused by AI. However, he acknowledged that this is not a perfect solution and that there is, as yet, no foolproof way to safeguard against the misuse of AI-generated content.
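As a rough sketch of the verification idea, the snippet below tags an image's raw bytes with a keyed hash using Python's standard hmac module. The secret key and byte strings are placeholders, and a deployed scheme would more likely use public-key signatures so that anyone can verify a tag without holding the signing key.

```python
import hashlib
import hmac

# Placeholder secret held by a hypothetical signing authority.
SECRET = b"authority-signing-key"

def sign(image_bytes: bytes) -> str:
    """Produce a verifiable tag over an image's raw bytes."""
    return hmac.new(SECRET, image_bytes, hashlib.sha256).hexdigest()

def verify(image_bytes: bytes, tag: str) -> bool:
    """Reject any image whose bytes no longer match the original tag."""
    return hmac.compare_digest(sign(image_bytes), tag)

original = b"...raw image bytes..."      # placeholder content
tag = sign(original)
tampered = b"...AI-altered bytes..."     # any modification breaks the tag
```

Any alteration to the signed bytes, including an AI-generated edit, invalidates the tag. The open problem Antonov points to is getting such tags issued, distributed, and checked universally, which no current scheme guarantees.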
One point was emphasized again and again throughout the event: AI will bring both opportunities and challenges.
“AI is a powerful tool, but like any tool, it can be used for good or ill,” he said. “By staying informed and proactive, we can mitigate the risks and harness the benefits of this transformative technology.”