Hitachi Vantara, the data storage, infrastructure, and hybrid cloud management subsidiary of Hitachi, Ltd., reported that financial institutions face challenges implementing AI due to data quality issues.
According to Hitachi Vantara, while 36% of respondents from the Banking, Financial Services, and Insurance (BFSI) sector recognize the importance of data quality for AI success, financial leaders tend to focus more on data security, resulting in gaps in AI performance and long-term return on investment (ROI).
“Financial institutions are rapidly adopting AI but are realizing that their data infrastructure is not ready,” said Joe Ong, vice president and general manager for Asean at Hitachi Vantara. “This global research reflects what we’re also hearing in Southeast Asia — that the real barrier to AI success isn’t the technology itself, but the ability to manage data securely, accurately, and at scale.”
About 48% of survey respondents cited data security as their primary concern in AI implementation, driven by the risk of internal and external threats. The report found that 84% believe losing data to an attack or mistake would be catastrophic. However, this relative neglect of data quality has hurt BFSI institutions: respondents reported that their AI models produce accurate outputs only 21% of the time, and that data is available when needed just a quarter of the time.
Risks of hallucinations
“Reputational harm is a significant risk in financial services,” said Mark Katz, CTO of Financial Services at Hitachi Vantara. “If a chatbot inadvertently discloses sensitive information from training data, it could have serious repercussions. The cost of a wrong answer or a hallucination raises liability questions.”
Despite these challenges, AI adoption in the BFSI sector continues to grow. However, many organizations are implementing AI without sufficient preparation. About 71% of respondents admitted to testing and iterating on live AI implementations, while only 4% used controlled sandbox environments.
Importance of strengthening data foundations
To address these issues, the report recommends building a resilient AI infrastructure by fostering responsible experimentation, integrating sustainability practices, simplifying systems, and leveraging AI for defense. Two in five BFSI leaders (42%) are building the necessary AI skills through experimentation, while others are focusing on secure sandbox testing to reduce risks.
Ong emphasized that strong data foundations are essential to sustainable AI outcomes: “Financial organizations must focus on strengthening their data foundations to ensure AI delivers real, sustainable impact.”