Back End News

AI adoption outpaces security controls, TrendAI study says


Philippine companies are adopting artificial intelligence (AI) tools faster than they can secure and govern them, according to new research from global AI security company TrendAI.

The TrendAI study surveyed 3,700 IT decision makers (ITDMs) and business decision makers (BDMs) worldwide, including 200 respondents from the Philippines. Results showed that 67% of ITDMs and 72% of BDMs in the country felt pressured to approve AI projects despite security and compliance concerns.

One in seven respondents described those concerns as “extreme,” but still moved forward to keep up with competitors and internal demand for AI adoption.

The findings highlight a growing gap between AI deployment and governance readiness. While many Philippine organizations said they already use data security measures (76%) and data quality management practices (72%), only 33% of ITDMs and 44% of BDMs said they feel highly prepared for the pace of AI adoption.

“Organizations are not lacking awareness of risk; they’re lacking the conditions to manage it,” said Rachel Jin, chief platform and business officer at TrendAI.

“When deployment is driven by competitive pressure rather than governance maturity, you create a situation where AI is embedded into critical systems without the controls needed to manage it safely,” Jin said.

TrendAI said many organizations still lack mature AI governance. While more than half of respondents said comprehensive AI policies are already in place, around one-third are still drafting them. The biggest obstacles include unclear regulations or compliance standards (53%) and a lack of security or data expertise (51%).

The report also pointed to increasing cybersecurity risks tied to AI systems. TrendAI said attackers are already using AI to automate reconnaissance, accelerate phishing campaigns, and make cybercrime easier to execute at scale.

Concerns are also growing around agentic AI, or AI systems capable of acting autonomously with limited human intervention. While 60% of respondents believe agentic AI could improve cyber defense in the short term, organizations remain worried about oversight and control.

More than half of Philippine organizations identified AI agents accessing sensitive data as the top risk. Other concerns include malicious prompts (53%), abuse of trusted AI status (51%), and a growing attack surface for cybercriminals (36%). Another 34% cited risks linked to autonomous code deployment.

The study also found that 30% of organizations admit they lack visibility or auditability over AI systems, making it difficult to monitor or intervene once autonomous systems are deployed.

As concerns grow, about 65% of Philippine organizations support the use of AI “kill switch” mechanisms that can shut down systems in case of failure or misuse. However, 33% remain undecided, a sign that many companies have yet to reach consensus on how to control autonomous AI technologies.

“Agentic AI is moving organizations into a new risk category,” Jin said. “Without visibility and control, organizations are deploying systems they don’t fully understand or govern, and that risk is only going to increase unless action is taken.”

For Philippine businesses, the findings reflect a bigger challenge as companies race to adopt AI tools while regulators, security teams, and governance policies struggle to keep pace. The issue is becoming more critical as AI systems gain access to sensitive corporate and customer data across industries such as banking, telecommunications, retail, and government.

For cybersecurity vendors and service providers, the rapid expansion of AI adoption also creates new demand for governance tools, threat monitoring, compliance services, and AI-specific security solutions in the local market.
