ChatGPT, a text-based artificial intelligence (AI) bot, has made headlines for its advanced capabilities. From accurately fixing coding bugs, generating cooking recipes, and creating 3D animations to composing entire songs, ChatGPT has showcased the mind-blowing power of AI to unlock a world of incredible new possibilities and new risks.
Cybersecurity solutions company Palo Alto Networks said that since its launch in November 2022, tech experts and commentators worldwide have been concerned about the impact AI-generated content tools will have on cybersecurity.
At recent Black Hat and DEF CON security conferences, a demonstration of hacking humans with AI-as-a-service revealed how AI can actually craft better phishing emails, and devilishly effective spear-phishing messages, than people can.
“Researchers using OpenAI’s GPT-3 platform in combination with other AI-as-a-service products focused on personality analysis generated phishing emails customised to their colleagues’ backgrounds and characters,” the company explained in a media release. “Eventually, the researchers developed a pipeline that refined the emails before they hit their targets. To their surprise, the platform also automatically supplied specifics, such as mentioning a Singaporean law when instructed to generate content for people in Singapore.”
The makers of ChatGPT have stated that the AI-driven tool has the built-in ability to challenge incorrect premises and reject inappropriate requests. While the system apparently has built-in guardrails designed to prevent criminal use, with a few tweaks it generated a near-flawless phishing email that sounded “weirdly human.”
This could mean more trouble for markets that are highly vulnerable to phishing attacks, such as the Philippines. In fact, the scale of phishing campaigns in the country prompted the government to begin investigations. This resulted in the approval of the SIM Card Registration Act, a law that requires users to register personal information upon SIM card purchase and activation, in an effort to encourage accountability and give law enforcement an identification tool for resolving crimes.
“Considering the looming threats of an ever smarter and more technologically advanced hacking landscape, the cybersecurity industry must be equally resourced to fight such AI-powered exploits,” said Sean Duca, VP and Regional Chief Security Officer for Asia Pacific & Japan at Palo Alto Networks. “In the long run, the industry’s vision cannot be that a swarm of human threat hunters tries to sporadically fix this with guesswork.”
The need of the hour is intelligent action to neutralise these evolving threats. On the positive side, autonomous response technologies are already addressing threats without human intervention. As AI-powered attacks become a part of everyday life, businesses, governments, and individuals impacted by such automated malware must increasingly rely on emerging technologies such as AI and machine learning (ML) to generate their own automated responses.
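To make the idea of an automated response concrete, the sketch below shows a toy phishing filter that scores an incoming email and quarantines it without waiting for a human analyst. This is a hypothetical illustration only, not any vendor's product: real autonomous-response systems use trained ML models, while this sketch uses simple keyword and link heuristics, and all function and variable names are invented for the example.

```python
# Hypothetical sketch of an automated phishing response.
# Real systems use trained ML classifiers; this uses crude heuristics
# purely to illustrate the "respond without human intervention" idea.

URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "password"}

def phishing_score(subject, body, sender_domain, link_domains):
    """Return a 0..1 heuristic score; higher means more phishing-like."""
    text = (subject + " " + body).lower()
    hits = sum(1 for w in URGENCY_WORDS if w in text)
    urgency = min(hits / len(URGENCY_WORDS), 1.0)
    # Links pointing somewhere other than the sender's domain are suspicious.
    mismatched = [d for d in link_domains if d != sender_domain]
    link_risk = len(mismatched) / len(link_domains) if link_domains else 0.0
    return 0.5 * urgency + 0.5 * link_risk

def autonomous_response(score, threshold=0.6):
    """Quarantine automatically instead of queueing for a human analyst."""
    return "quarantine" if score >= threshold else "deliver"
```

For example, a benign internal note with no urgency keywords and no external links scores near zero and is delivered, while a mail urging the reader to "verify your password immediately" with a link to an unrelated domain scores high and is quarantined automatically.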
As AI continues to be developed, businesses will continue to face a number of challenges in navigating the AI cybersecurity landscape. In particular, there is considerable focus on finding the balance between machines, humans, and ethical considerations.
“Establishing corporate policies is critical to doing business ethically, while improving cybersecurity,” Duca said. “We need to establish effective governance and legal frameworks that enable greater trust in AI technologies being implemented around us to be safe, reliable, and contribute to a just and sustainable world. The delicate balance between AI and humans will therefore emerge as a key factor towards successful cybersecurity in which trust, transparency, and accountability supplement the benefits of machines.”