A security expert at cybersecurity solutions provider Kaspersky believes that artificial intelligence (AI) can augment organizations’ cybersecurity defenses, given the industry’s continuing skills gap.
According to Kaspersky, the industry faced a shortage of 2.1 million cybersecurity professionals as of 2022. AI can boost organizations’ security postures, especially by automating certain tasks.
“As of 2022, APAC needs to meet a 52.4% cybersecurity talent gap as the region drives its digital economy,” said Saurabh Sharma, senior security researcher for the Global Research and Analysis Team (GReAT) Asia Pacific at Kaspersky. “This urgent need can drive IT security teams to look into using smart machines in augmenting their organizations’ cyber defenses, and AI can help in key areas like threat intelligence, incident response, and threat analysis.”
Earlier reports have shown that cybercriminals also have ways of exploiting the technology, and security teams can leverage AI to counter those attacks.
In terms of threat analysis, the stage where cyber defenders try to understand how the tools used in an attack work, Sharma noted that technologies like ChatGPT can assist in tasks such as identifying critical components in malware code, deobfuscating malicious scripts, and creating dummy web servers with particular encryption schemes.
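As a hypothetical illustration of the deobfuscation task mentioned above: commodity malware droppers often wrap their real payload in several layers of Base64 encoding, and peeling those layers off is a routine first analysis step. The sample string and helper name below are illustrative, not taken from Kaspersky’s research:

```python
import base64

def deobfuscate_base64_layers(blob: str, max_layers: int = 10) -> str:
    """Repeatedly Base64-decode a string until it no longer decodes cleanly.

    Stops when the current layer is not valid Base64 (or not valid UTF-8),
    which usually means the underlying script has been reached.
    """
    current = blob
    for _ in range(max_layers):
        try:
            decoded = base64.b64decode(current, validate=True).decode("utf-8")
        except (ValueError, UnicodeDecodeError):
            break  # no further valid Base64 layer
        current = decoded
    return current

# Illustrative sample: the string "echo pwned" wrapped in two Base64 layers.
layered = base64.b64encode(base64.b64encode(b"echo pwned")).decode()
print(deobfuscate_base64_layers(layered))  # -> echo pwned
```

Real obfuscation is rarely this clean, but tools like ChatGPT can help an analyst recognize which encoding scheme a script uses and draft exactly this kind of unwrapping helper.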
“AI has clear benefits for cybersecurity teams, especially in automating data collection, improving Mean Time to Resolution (MTTR), and limiting the impact of any incidents,” Sharma said.
For cyber incident response, the Kaspersky expert noted that AI can flag anomalies in a provided set of logs, interpret a security event log, generate an example of what a particular security event log may look like, and suggest steps to look for an initial implant such as a web shell. Sharma, however, highlighted the limitations of AI in building and maintaining cyber defenses.
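The kind of web-shell hunting step an AI assistant might suggest can be sketched in a few lines of Python. The log format and the indicator pattern here are illustrative assumptions, not Kaspersky guidance:

```python
import re

# Illustrative indicator: POST requests to server-side script files with
# command-like query parameters are a classic web-shell tell.
SUSPICIOUS = re.compile(
    r'POST\s+\S*\.(php|jsp|aspx)\?(cmd|exec|shell)=', re.IGNORECASE
)

def flag_web_shell_candidates(access_log_lines):
    """Return log lines matching simple web-shell indicators for triage."""
    return [line for line in access_log_lines if SUSPICIOUS.search(line)]

# Minimal sample log to show the filter in action.
sample_log = [
    '10.0.0.5 - - "GET /index.html HTTP/1.1" 200',
    '10.0.0.9 - - "POST /uploads/img.php?cmd=whoami HTTP/1.1" 200',
    '10.0.0.5 - - "GET /style.css HTTP/1.1" 200',
]
for hit in flag_web_shell_candidates(sample_log):
    print(hit)
```

A pattern match like this only surfaces candidates; a human analyst still has to confirm whether a flagged request is a genuine implant, which is the limitation Sharma points to.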
To ensure a strong cyber defense, Kaspersky advises enterprises and organizations in APAC to observe the following:
- Focus on augmenting existing teams and workflows
- Make transparency part of Generative AI exploration and application, especially when it provides incorrect information
- Log all interactions with Generative AI, make them available for review, and retain them for the life of the products deployed in the enterprise
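The last recommendation can be sketched as a thin audit wrapper that records every prompt and response before returning the result. The `model_call` function below is a placeholder for whatever client an organization actually uses, not a specific vendor API:

```python
import json
import logging
from datetime import datetime, timezone

# Append-only JSON-lines audit log, retained for the product's lifetime.
logging.basicConfig(filename="genai_audit.jsonl", level=logging.INFO,
                    format="%(message)s")

def logged_completion(prompt: str, model_call) -> str:
    """Call a Generative AI model and persist the full interaction for review.

    `model_call` is a stand-in for the real client function; it simply
    maps a prompt string to a response string.
    """
    response = model_call(prompt)
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
    }))
    return response

# Usage with a stub model so the sketch is self-contained.
print(logged_completion("Summarize this event log", lambda p: "stub summary"))
```

Because every interaction is written out as a timestamped JSON record, reviewers can later audit what the model was asked and what it answered, including any incorrect information it produced.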
“If utilized effectively, this technology can also reduce the skill requirements for security analysts, but organizations should remember that smart machines can augment and supplement human talent, not replace it,” Sharma added.