Kaspersky, a cybersecurity solutions company, conducted an experiment to check ChatGPT’s cybersecurity knowledge. The research also aimed to assess the AI-powered language model’s ability to detect phishing links.
In a media release, Kaspersky said its experts tested gpt-3.5-turbo, the model that powers ChatGPT, on more than 2,000 links that the company’s anti-phishing technologies had flagged as phishing, mixed with thousands of safe URLs.
“The study revealed that although ChatGPT knows a great deal about phishing and can guess the target of a phishing attack, it had high false positive rates of up to 64%,” Kaspersky said in a statement. “Often, it produced imaginary explanations and false evidence to justify its verdicts.”
In the experiment, detection rates varied depending on the prompt used. The test was based on asking ChatGPT two questions:
1. Does this link lead to a phishing website?
2. Is this link safe to visit?
The results showed that for the first question, ChatGPT had a detection rate of 87.2% and a false positive rate of 23.2%. For the second question, “Is this link safe to visit?”, the detection rate was higher at 93.8%, but so was the false positive rate, at 64.3%.
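The setup lends itself to a small reproduction. Below is a minimal sketch, assuming the OpenAI Python SDK and a hand-labeled list of (url, is_phishing) pairs; the prompt wording follows the two questions above, while the yes/no answer parsing and the sample data are illustrative assumptions rather than Kaspersky’s actual methodology.

```python
# Minimal sketch (not Kaspersky's harness): ask gpt-3.5-turbo one of
# the two questions about each URL, parse a yes/no verdict, and tally
# detection and false positive rates against known labels.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPTS = {
    "phishing": "Does this link lead to a phishing website? Answer yes or no: {url}",
    "safe": "Is this link safe to visit? Answer yes or no: {url}",
}

def ask_model(prompt: str) -> str:
    """Send one question to gpt-3.5-turbo and return the reply text."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep verdicts as deterministic as possible
    )
    return resp.choices[0].message.content.strip().lower()

def evaluate(dataset, template, yes_means_phishing):
    """Return (detection_rate, false_positive_rate) over (url, is_phishing) pairs."""
    tp = fp = phishing_total = benign_total = 0
    for url, is_phishing in dataset:
        says_yes = ask_model(template.format(url=url)).startswith("yes")
        # For the first prompt, "yes" flags phishing; for the second,
        # "no" (not safe) is the phishing verdict.
        flagged = says_yes if yes_means_phishing else not says_yes
        if is_phishing:
            phishing_total += 1
            tp += flagged
        else:
            benign_total += 1
            fp += flagged
    return tp / phishing_total, fp / benign_total

# Hypothetical labeled sample: (url, is_phishing)
sample = [("https://example.com", False)]
print(evaluate(sample, PROMPTS["phishing"], yes_means_phishing=True))
```

Running the same dataset through both prompt templates is what makes the two false positive rates comparable: only the question wording changes, not the data or the verdict-counting logic.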
“While the detection rate is very high, the false positive rate is too high for any kind of production application,” Kaspersky said.
Kaspersky wanted to establish whether ChatGPT, for all its artificial intelligence and the vast amount of data it was trained on, can help classify and investigate cyber attacks.
“Since attackers typically mention popular brands in their links to deceive users into believing that the URL is legitimate and belongs to a reputable company, the AI language model shows impressive results in the identification of potential phishing targets,” Kaspersky said.
The experiment also showed that ChatGPT successfully extracted a target from more than half of the URLs, including major tech portals such as Facebook, TikTok, and Google, marketplaces such as Amazon and Steam, and numerous banks from around the globe, all without any additional training.
The experiment also showed that ChatGPT can have serious problems justifying its verdict on whether a link is malicious. Some explanations were correct and based on facts, while others revealed known limitations of language models, including hallucinations and misstatements: many explanations were misleading despite their confident tone.