Many professionals and office workers now rely on artificial intelligence (AI) to handle everyday tasks. While many companies encourage employees to use AI to boost productivity, some staff turn to unapproved AI tools without management consent, a practice known as Shadow AI.
Shadow AI occurs when employees use AI applications without the knowledge or oversight of the IT department. The trend is reminiscent of the earlier bring-your-own-device (BYOD) movement, which was criticized for exposing corporate data to security risks. Like BYOD, Shadow AI bypasses formal vetting processes, potentially creating gaps in governance and risk management.
Approved AI systems, on the other hand, go through structured reviews to ensure they meet an organization’s security, compliance, and operational standards before being integrated into daily workflows.
Simply put, sanctioned AI is managed and accountable, while Shadow AI operates unchecked and carries greater risk.
AI security testing firm Mindgard notes that Shadow AI mirrors the earlier issue of Shadow IT but with higher stakes, as AI tools often process sensitive code, proprietary business data, and regulated customer information.
In its survey of cybersecurity professionals at the RSA Conference 2025 and InfoSecurity Europe 2025, Mindgard found that 56% of respondents confirmed unapproved AI use in their organizations, while another 22% suspected it was occurring. The study also revealed that 87% of cybersecurity practitioners use AI in their own daily workflows, and nearly one in four admitted to using personal ChatGPT accounts or browser extensions outside any formal approval, logging, or compliance process.
“Shadow AI poses significant risks to organizations,” said Roshan Royan, Data and AI Security Technical Sales Leader, IBM APAC. “These risks include potential data privacy breaches when sensitive information is entered into unapproved tools, regulatory non-compliance from bypassing laws like GDPR or CCPA, heightened security threats from unvetted platforms, inconsistent AI use that fragments decision-making, and serious reputation or legal consequences from unauthorized applications.”

Once employees experience the efficiency of AI, it can be difficult to revert to older methods. With many AI tools offering free versions, the temptation to use them to ease workloads is high.
“Shadow AI often gains traction because the tools are easy to access, official systems can feel too restrictive, and employees want faster solutions to meet urgent needs without waiting for IT approval,” Royan said.
According to Royan, AI-powered chatbots are among the most common forms of Shadow AI. This is followed by the use of large language models (LLMs) and machine learning (ML) tools for data analysis, where employees may process company data to generate insights.
“But this can create security vulnerabilities when proprietary datasets unknowingly expose sensitive information to unauthorized tools,” he said.
Some employees also use AI-powered data visualization tools to generate heat maps and charts for illustrating complex information. While these tools can save time, uploading sensitive corporate data into unsecured platforms can lead to reporting errors, data breaches, or compliance violations.
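A common first line of defense is to strip obviously sensitive values before anything leaves the corporate perimeter. The Python sketch below is a deliberately minimal illustration of that idea; the regex patterns and the redact helper are hypothetical, and a production setup would rely on a vetted data-loss-prevention (DLP) tool rather than hand-rolled rules.

```python
import re

# Hypothetical patterns for illustration only; a real deployment would
# use a vetted DLP library instead of hand-rolled regexes.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    """Replace sensitive substrings with placeholder tags before the
    text is pasted into, or sent to, an external AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}_REDACTED]", text)
    return text

print(redact("Contact jane.doe@example.com, card 4111 1111 1111 1111"))
# -> Contact [EMAIL_REDACTED], card [CREDIT_CARD_REDACTED]
```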
Royan cited a 2023 incident in which employees of a major electronics company inadvertently leaked confidential information by entering proprietary source code and internal meeting notes into an AI chatbot. The leaks happened three times within 20 days, underscoring the risks of unvetted AI use.
Security measures and governance tools
Like other cybersecurity challenges, experts say visibility is key to tackling Shadow AI. Increasingly, that means pitting AI against AI, with AI-driven monitoring used to surface unsanctioned AI tools.
“Companies now deploy AI-powered security tools that monitor all AI models operating within their environment,” Royan said. “For these tools to be most effective, they should be integrated into existing data security software rather than used as standalone solutions. Confronting Shadow AI isn’t much different from addressing shadow data or other mature security issues.”
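At its simplest, the visibility problem reduces to spotting outbound traffic to known AI services that has not been sanctioned. The Python sketch below is a hypothetical illustration of that idea: the domain watchlist, the log format, and the flag_shadow_ai helper are all assumptions, and commercial tools draw on far richer signals than proxy logs.

```python
from collections import Counter

# Hypothetical watchlist of public AI service domains; a real deployment
# would pull this from a maintained CASB or threat-intelligence feed.
AI_DOMAINS = {
    "chat.openai.com", "api.openai.com",
    "gemini.google.com", "claude.ai", "api.anthropic.com",
}

def flag_shadow_ai(proxy_log_lines, sanctioned=frozenset({"api.openai.com"})):
    """Count requests to known AI services that are not sanctioned.
    Assumes whitespace-separated 'user domain' proxy-log records."""
    hits = Counter()
    for line in proxy_log_lines:
        user, domain = line.split()[:2]
        if domain in AI_DOMAINS and domain not in sanctioned:
            hits[(user, domain)] += 1
    return hits

log = ["alice chat.openai.com", "bob claude.ai", "alice api.openai.com"]
print(flag_shadow_ai(log))
# Counter({('alice', 'chat.openai.com'): 1, ('bob', 'claude.ai'): 1})
```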
Royan cited IBM’s Guardium AI Security, which, when integrated with watsonx.governance, provides organizations with continuous visibility, governance, security, and compliance capabilities. The system can detect both sanctioned and unsanctioned AI usage within an enterprise’s IT environment.
Amazon Web Services (AWS) has also rolled out multiple safeguards for its generative AI services.
“Amazon Bedrock Guardrails allows customers to configure application-specific safeguards based on their responsible AI policies, including Automated Reasoning checks to verify the factual accuracy of generated content,” said Joel Garcia, Technology Lead for Strategic Initiatives, AWS for ASEAN.
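For a sense of what configuring such safeguards looks like, the sketch below uses the boto3 SDK's create_guardrail and apply_guardrail calls. The policy values, messages, and region are illustrative choices for a hypothetical policy, not a recommended baseline.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Create a guardrail that anonymizes emails, blocks card numbers, and
# filters prompt-attack attempts on input.
guardrail = bedrock.create_guardrail(
    name="corp-responsible-ai",
    contentPolicyConfig={"filtersConfig": [
        {"type": "PROMPT_ATTACK", "inputStrength": "HIGH", "outputStrength": "NONE"},
    ]},
    sensitiveInformationPolicyConfig={"piiEntitiesConfig": [
        {"type": "EMAIL", "action": "ANONYMIZE"},
        {"type": "CREDIT_DEBIT_CARD_NUMBER", "action": "BLOCK"},
    ]},
    blockedInputMessaging="This request violates our AI usage policy.",
    blockedOutputsMessaging="The response was blocked by policy.",
)

# Evaluate text against the working draft of the guardrail at runtime.
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
result = runtime.apply_guardrail(
    guardrailIdentifier=guardrail["guardrailId"],
    guardrailVersion="DRAFT",
    source="INPUT",
    content=[{"text": {"text": "My card number is 4111 1111 1111 1111"}}],
)
print(result["action"])  # "GUARDRAIL_INTERVENED" when a policy triggers
```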

AWS has introduced AI Service Cards to provide transparency on data usage and model provenance. The company has also integrated tools such as LLM-as-a-judge and retrieval-augmented generation (RAG) evaluation in Amazon Bedrock Evaluations, enabling customers to assess the accuracy, relevance, and reliability of AI outputs.
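The LLM-as-a-judge pattern itself is simple: a second model grades the first model's output against a rubric. The sketch below is a hand-rolled version using the Bedrock converse API; the judge model, prompt, and scoring scale are illustrative assumptions, and Bedrock Evaluations packages the same idea as a managed job.

```python
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Illustrative choice of judge model; any sufficiently capable model works.
JUDGE_MODEL = "anthropic.claude-3-haiku-20240307-v1:0"

def judge_faithfulness(question: str, context: str, answer: str) -> str:
    """Ask a judge model to rate how well the answer is grounded in the
    retrieved context, a core check in RAG evaluation."""
    prompt = (
        "Rate from 1 to 5 how faithfully the ANSWER is supported by the "
        "CONTEXT. Reply with the number only.\n"
        f"QUESTION: {question}\nCONTEXT: {context}\nANSWER: {answer}"
    )
    response = runtime.converse(
        modelId=JUDGE_MODEL,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]

print(judge_faithfulness(
    "When was the AI usage policy last revised?",
    "The AI usage policy was last revised in March 2025.",
    "It was revised in March 2025.",
))  # e.g. "5"
```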
“To support human oversight, AWS offers monitoring and auditing features, as well as copyright indemnity coverage,” Garcia said. “These measures help customers maintain control over AI applications and manage risks tied to generated content.”
Responsible AI use culture
Royan emphasized that organizations must train employees to “demystify AI” and make them aware of the risks of unsanctioned use.
“When employees understand the risks of using unapproved AI tools, such as data breaches, compliance violations, and reputational damage, they are more likely to follow secure practices,” he said.
Best practices include setting clear usage policies aligned with industry regulations, involving multiple departments in policy creation, conducting regular security and compliance reviews, and offering employee training alongside feedback channels to keep policies effective.
Royan stressed that introducing guardrails should not mean stifling innovation. Shadow AI, he said, can become a source of valuable ideas if managed effectively. This can involve encouraging responsible experimentation within defined guidelines, monitoring for promising employee-driven solutions, and integrating validated tools into the official tech stack.
Regular reviews of Shadow AI applications can help identify which ones should be sanctioned. A controlled sandbox environment offers a secure space for testing AI tools without risking sensitive data, while pilot programs allow teams to trial innovations with clear oversight.
By fostering open collaboration between IT and other departments, organizations can convert Shadow AI from a security concern into a channel for innovation. Ultimately, providing employees with secure, approved AI tools that protect data privacy is a safer strategy than allowing them to rely on unvetted external solutions.