LAS VEGAS — Multiple studies have shown that artificial intelligence (AI) is accelerating security risks, as attacks move at machine speed and new threats continue to emerge. During a security session at AWS re:Invent 2025, security experts from the company highlighted how the newly launched AWS Security Agents can make sense of signals from different sources.

As more companies use agent-based systems, connecting signals from different tools and responding to incidents has become more complex, increasing the need for automated, real-time detection and response.

“We know that every customer wants their products to be secure, but you have trade-offs,” Matt Garman, CEO of AWS, said in his keynote. “Where do you spend your time? Do you prioritize improving the security of existing features, or do you prioritize shipping new ones?”

The growing demand for protection is reflected in global spending forecasts. Investment in advanced security tools is expected to grow from $213 billion in 2025 to $377 billion by 2028 as organizations adopt generative AI and expand their cloud operations.

AWS outlined new AI and automation tools aimed at helping customers detect risks earlier. The updates include AI-powered assistants that identify issues, automated systems that flag unusual activity, and tools that manage user access more flexibly. The company said the goal is to help organizations secure applications, networks, and data before problems disrupt operations.

One of the key additions is the AWS Security Agent, which is being built directly into development and security workflows. The tool performs automated design and code reviews, gathers signals for incident response, and conducts context-aware penetration testing. By checking for risks from design to deployment, the agent aims to reduce security gaps earlier in the development cycle.

Neha Rungta, director of Applied Sciences at AWS, said traditional penetration testing often relies on generic checklists and takes weeks to complete. She said the new agent considers business logic and context, allowing tests to be done on demand. 

“The security agent is one of these frontier AI agents, a sophisticated class of AI agents that are autonomous and scalable and can work for long periods without human intervention,” Rungta said. “Security doesn’t have to be an afterthought.”

From left: Amy Herzog, CISO, AWS; Hart Rossman, VP, Office of the CISO, AWS; Neha Rungta, director, Applied Science, AWS; and Gee Rittenhouse, VP, Security Services, AWS.

The AWS Security Agent can be run as often as needed and returns findings within hours. This makes penetration testing more practical for teams with fast release cycles, where delays can slow product updates. Rungta said the tool aims to give teams stronger assurance without needing deep expertise in logic or security.

“For aspects of security, the assurance you get out of the box is proof-based, without having to be math experts, logic experts, or even security experts,” she said.

AWS said the long-term goal is to help reduce common application security issues over the next few years. Remaining challenges include how to monitor AI agents, define their scope, and make sure they do not block legitimate activity or slow down services.

Rungta said the ability to run tests on demand could change how teams secure software. 

“What used to take months can now be done in hours. You can run it for any application, and you can run it as often as you want,” she said.

In addition to introducing AWS Security Agents, the company expanded its agentic AI portfolio with updates to two security capabilities designed to improve incident response and tighten access controls for AI-driven systems.

AWS Security Incident Response adds agentic, AI-powered investigation capabilities. The feature analyzes event data, identifies likely causes, and recommends next steps, helping teams respond and recover more quickly. By automating parts of the investigation process, it reduces the time security teams spend correlating logs, checking configurations, and validating threat indicators across systems.

AgentCore Identity allows authentication and access rules to be applied directly to AI agents, ensuring they interact only with authorized services and data. The system uses user permissions and attributes to set boundaries, reducing the risk of unauthorized access or data exposure when AI agents operate across multiple enterprise applications.

By Marlet Salazar

Marlet Salazar is a technology writer focusing on cybersecurity. In 2018, driven by her passion for the tech industry, she founded Back End News through bootstrapped funding. She honed her writing skills at the Philippine Daily Inquirer, rising from proofreader to desk editor through the years.
