As more companies turn to artificial intelligence (AI) to stay competitive, cybersecurity firm Tenable is warning that many are not fully prepared to secure the open-source tools and cloud services that power their AI projects.
In its newly released Cloud AI Risk Report 2025, Tenable found that the rapid growth of AI adoption is creating hidden risks across cloud environments. These include unpatched vulnerabilities, misconfigured services, and exposed data, often stemming from the heavy use of open-source tools.
“Organizations are rapidly adopting open-source AI frameworks and cloud services to accelerate innovation, but few are pausing to assess the security impact,” said Nigel Ng, senior vice president at Tenable Asia Pacific and Japan. “The very openness and flexibility that make these tools powerful also create pathways for attackers.”
The report analyzed AI-related workloads from December 2022 to November 2024 across major platforms, including Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). It found that many developers are integrating open-source packages like Scikit-learn and Ollama — used in 28% and 23% of AI workloads, respectively — without always checking them for security issues.
Heavy reliance on open-source libraries
Ng explained that these open-source tools can introduce vulnerabilities through their complex dependency chains. Most AI workloads run on Unix-based systems that lean heavily on open-source libraries, so older, unpatched flaws can linger in the stack, raising the risk of data exposure or manipulation of AI models.
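By way of illustration (this sketch is not drawn from Tenable's report), a team could start by inventorying the Python packages running in an AI workload and checking them against published vulnerability advisories. The example below assumes the PyPA pip-audit tool is installed in the environment; the function names are hypothetical.

```python
# Illustrative sketch: inventory installed Python packages in an AI workload
# and check them against known-vulnerability advisories.
# Assumes the PyPA "pip-audit" tool is installed (pip install pip-audit).
import subprocess
from importlib import metadata


def list_installed_packages():
    """Return a sorted inventory of installed distributions and their versions."""
    return sorted((dist.metadata["Name"], dist.version) for dist in metadata.distributions())


def audit_environment():
    """Run pip-audit against the current environment and print its findings."""
    result = subprocess.run(["pip-audit"], capture_output=True, text=True)
    print(result.stdout or "No known vulnerabilities reported.")
    return result.returncode  # a non-zero code typically signals findings or errors


if __name__ == "__main__":
    for name, version in list_installed_packages():
        print(f"{name}=={version}")
    audit_environment()
```

Running a check like this regularly, rather than only at deployment time, is one way to keep older, unpatched dependencies from going unnoticed.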
Tenable also found that businesses are relying heavily on managed cloud services to run their AI systems. For example, 60% of companies using Microsoft Azure had set up Azure Cognitive Services, while 25% of AWS users deployed Amazon SageMaker. These services offer speed and scale, but they can create risk when default settings are left unchanged rather than hardened before production use.
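As one illustrative example (again, not taken from the report), a security team could use the AWS SDK to list SageMaker notebook instances and flag any that still allow direct internet access, a default worth reviewing in a locked-down environment. The region name and credentials below are assumptions.

```python
# Illustrative sketch: flag Amazon SageMaker notebook instances that still
# allow direct internet access. Assumes boto3 is installed and AWS credentials
# with SageMaker read permissions are configured.
import boto3


def find_exposed_notebooks(region_name="us-east-1"):
    """List notebook instances and flag any with direct internet access enabled."""
    sagemaker = boto3.client("sagemaker", region_name=region_name)
    flagged = []
    paginator = sagemaker.get_paginator("list_notebook_instances")
    for page in paginator.paginate():
        for instance in page["NotebookInstances"]:
            detail = sagemaker.describe_notebook_instance(
                NotebookInstanceName=instance["NotebookInstanceName"]
            )
            if detail.get("DirectInternetAccess") == "Enabled":
                flagged.append(detail["NotebookInstanceName"])
    return flagged


if __name__ == "__main__":
    for name in find_exposed_notebooks():
        print(f"Notebook instance with direct internet access: {name}")
```

Simple configuration audits of this kind give organizations the visibility into deployed services that Ng describes as missing.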
“AI will shape the future of business, but only if it is built on a secure foundation,” Ng said. “Open-source tools and cloud services are essential, but they must be managed with care. Without visibility into what is being deployed and how it is configured, organisations risk losing control of their AI environments and the outcomes those systems produce.”