Companies are increasingly using Agentic AI to improve productivity and free up employee time. It also promises a better customer experience through quicker response times. But experts warn of “agent drift,” a term used by research firm Gartner to describe AI systems that start making decisions no longer aligned with their intended goals.

At IBM’s virtual media session, “Supercharging ASEAN’s Growth with Agentic AI,” Henke Yunkins, director of Regulation and Ethics at the Indonesia Artificial Intelligence Society (IAIS), stressed the need for oversight and control.

“The best thing about Agentic AI is that it allows you to operationalize your guardrails,” Yunkins said. “In our company, we audit each specific reasoning step: when an agent makes a decision, we compare it against our validation checkpoints.”

Yunkins added that retraining is important to improve performance over time.

“All these checks and passes, such as traceability, scoring, transitions, and checkpoints, can aid in retraining the agents to handle new data, with traceability ensured through confidence scoring and validation checkpoints,” he said.
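The checkpoint-and-scoring pattern Yunkins describes can be sketched in a few lines. This is an illustrative mock-up, not IAIS's or IBM's actual tooling: the class names, the 0.75 threshold, and the step labels are all assumptions. The idea is that every agent decision is logged with a confidence score, validated against a checkpoint, and low-confidence steps are flagged as candidates for retraining.

```python
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.75  # assumed cutoff; real systems tune this per use case

@dataclass
class DecisionRecord:
    step: str            # which reasoning step produced the decision
    decision: str        # what the agent decided
    confidence: float    # model-reported confidence in [0, 1]
    approved: bool = False

@dataclass
class CheckpointAuditor:
    trace: list = field(default_factory=list)

    def record(self, step: str, decision: str, confidence: float) -> bool:
        rec = DecisionRecord(step, decision, confidence)
        # Validation checkpoint: only high-confidence decisions pass automatically.
        rec.approved = confidence >= CONFIDENCE_THRESHOLD
        self.trace.append(rec)
        return rec.approved

    def flagged_for_retraining(self) -> list:
        # Low-confidence steps become candidate examples for retraining.
        return [r for r in self.trace if not r.approved]

auditor = CheckpointAuditor()
auditor.record("classify_intent", "refund_request", 0.92)
auditor.record("choose_action", "auto_refund", 0.40)
print(len(auditor.flagged_for_retraining()))  # → 1 low-confidence step flagged
```

Because every decision lands in the trace, the full reasoning path stays auditable after the fact, which is the traceability property both speakers emphasize.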

“I think for the large language models, there needs to be frequent benchmarking and retraining of these models to ensure that the agent doesn’t drift,” said Dr. Clifton Phua, director of Labs at the Infocomm Media Development Authority (IMDA).

He noted that keeping humans in the loop is necessary so that people approve the decisions made by AI agents and help set boundaries, ensuring there are certain things the agents cannot do and limits they cannot cross.
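In code, the two safeguards Dr. Phua names, hard boundaries and human approval, often look like a simple gate in front of every agent action. The sketch below is a hypothetical illustration, with made-up action names and sets, not a description of IMDA's or any vendor's system.

```python
# Actions the agent may never perform, no matter what it reasons its way into.
FORBIDDEN_ACTIONS = {"delete_account", "issue_refund_over_limit"}

# Actions that require a human to sign off before they run.
REQUIRES_HUMAN_APPROVAL = {"issue_refund", "change_billing"}

def execute(action: str, human_approved: bool = False) -> str:
    if action in FORBIDDEN_ACTIONS:
        return "blocked"                  # a limit the agent cannot cross
    if action in REQUIRES_HUMAN_APPROVAL and not human_approved:
        return "pending human approval"   # the human-in-the-loop gate
    return "executed"

print(execute("delete_account"))                      # → blocked
print(execute("issue_refund"))                        # → pending human approval
print(execute("issue_refund", human_approved=True))   # → executed
print(execute("send_status_update"))                  # → executed
```

The point of putting the check outside the model is that it holds even if the agent's reasoning drifts: the boundary is enforced by plain code, not by the model's own judgment.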

“At IBM, one of the simplest approaches we’ve found is to start at the very beginning of a project,” said Anup Kumar, CTO for Data and AI and head of Client Engineering Asia Pacific at IBM. “When we are building a solution for a customer, our first point of contact is often the design team. Their focus is on the overall customer experience and how the application feels and functions.”

He shared an example of the guardrails IBM implements: if the system doesn’t know the answer, it is allowed to say so, or to escalate to a human agent when needed. This prevents the system from giving wrong or misleading responses and spares users hours of frustration.
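A minimal sketch of that guardrail, assuming the system exposes a confidence score for its draft answer. The thresholds and function names here are invented for illustration; this is not IBM's implementation, just the general pattern of answering only when confident, escalating when unsure, and admitting ignorance otherwise.

```python
ANSWER_THRESHOLD = 0.8      # assumed: confident enough to answer directly
ESCALATION_THRESHOLD = 0.5  # assumed: unsure, hand off to a person

def guarded_answer(question: str, draft_answer: str, confidence: float) -> str:
    if confidence >= ANSWER_THRESHOLD:
        return draft_answer
    if confidence >= ESCALATION_THRESHOLD:
        # Escalate rather than guess: a human agent takes over.
        return "I'm not sure. Let me connect you with a human agent."
    # Below both thresholds, saying "I don't know" beats a misleading reply.
    return "I don't know the answer to that."

print(guarded_answer("Q", "Your order ships Friday.", 0.9))  # answers directly
print(guarded_answer("Q", "Your order ships Friday.", 0.6))  # escalates
print(guarded_answer("Q", "Your order ships Friday.", 0.2))  # admits it doesn't know
```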

“This is where the importance of the human touch comes in,” Jack Madrid, president and CEO of the IT and Business Process Association of the Philippines (IBPAP), said. “It’s about sensing when a customer is starting to feel frustrated. One solution, I believe, lies in how we design AI implementations. A more in-depth integration of training data tailored to specific processes is crucial. And something we haven’t really discussed yet is real-time agent auditability.”

While Agentic AI is designed to reason with a degree of autonomy, human oversight remains the safest way to prevent agent drift.


By Marlet Salazar

Marlet Salazar is a technology writer focusing on cybersecurity. In 2018, driven by her passion for the tech industry, she founded Back End News through bootstrapped funding. She honed her writing skills at the Philippine Daily Inquirer, rising from proofreader to desk editor through the years.
