Banks have deployed machine learning in fraud detection and credit scoring for years, but scaling those systems into core operations — lending, payments, compliance — demands a level of governance most institutions have not yet formalized.
E.SUN Bank and IBM Consulting have responded to that gap by developing an AI governance framework specifically designed for financial services, according to the companies’ press release. The project also produced a white paper detailing how banks can build internal controls around AI systems from development through deployment.
The framework draws directly from two emerging global standards: the EU AI Act, adopted in 2024, and ISO/IEC 42001, published in 2023. Both are adapted here for the specific operational and regulatory demands of banking. The EU AI Act imposes strict obligations on high-risk AI systems, requiring firms to assess risks, document training data, and monitor model behavior after deployment. ISO/IEC 42001 addresses how organizations build management systems for AI at an institutional level, rather than treating each model as a standalone tool.
What the Framework Covers
The governance structure addresses three distinct stages. Before deployment, models go through a structured review process. After they enter production, teams monitor outputs on an ongoing basis. Throughout, the framework assigns clear responsibility — spanning developers, risk managers, and compliance staff — for how each system behaves.
The accompanying white paper adds a classification layer, outlining how banks can tier AI systems by risk level and apply proportionate oversight to each. A system that surfaces answers to customer service queries carries different risk than one that influences loan approvals or flags potentially fraudulent transactions. The framework treats those distinctions as operational, not theoretical.
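The white paper does not publish the tiering criteria themselves, but the idea of proportionate oversight can be sketched in code. The sketch below is purely illustrative: the tier names, the attributes on `AISystem`, and the oversight table are assumptions for the example, not E.SUN's or IBM's actual policy.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low"        # e.g. surfacing answers to customer service queries
    MEDIUM = "medium"  # e.g. flagging potentially fraudulent transactions
    HIGH = "high"      # e.g. influencing loan approvals

# Oversight obligations scale with the tier (hypothetical values).
OVERSIGHT = {
    RiskTier.LOW:    {"pre_deployment_review": False, "continuous_monitoring": False, "human_in_loop": False},
    RiskTier.MEDIUM: {"pre_deployment_review": True,  "continuous_monitoring": True,  "human_in_loop": False},
    RiskTier.HIGH:   {"pre_deployment_review": True,  "continuous_monitoring": True,  "human_in_loop": True},
}

@dataclass
class AISystem:
    name: str
    affects_credit_decisions: bool
    flags_transactions: bool

def classify(system: AISystem) -> RiskTier:
    """Assign a tier from coarse attributes; a real policy would use far richer criteria."""
    if system.affects_credit_decisions:
        return RiskTier.HIGH
    if system.flags_transactions:
        return RiskTier.MEDIUM
    return RiskTier.LOW
```

The point of such a structure is operational: once a system is tiered, its review and monitoring obligations follow mechanically rather than being negotiated case by case.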
The core problem the project addresses is well-established in financial regulation. AI models frequently function as black boxes, producing results without legible reasoning. In credit decisions or fraud checks, that opacity creates direct regulatory exposure. Regulators across major jurisdictions have begun requiring firms to demonstrate not just that their models perform well, but that they can explain and audit how decisions are reached.
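One common way to meet that explain-and-audit demand is to log every model decision with the factors behind it, as credit scoring already does with adverse-action reason codes. The helper below is a minimal sketch of such an audit record; the function name and field layout are assumptions for illustration, not part of the E.SUN/IBM framework.

```python
import json
from datetime import datetime, timezone

def record_decision(model_id: str, model_version: str, inputs: dict,
                    decision: str, reason_codes: list[str]) -> str:
    """Build a JSON audit record capturing what drove a model decision.

    Reason codes give regulators and customers a legible explanation
    for the outcome, even when the model itself is opaque.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,   # pin the exact model audited
        "inputs": inputs,                 # the features the model saw
        "decision": decision,
        "reason_codes": reason_codes,     # top factors behind the decision
    }
    return json.dumps(record)
```

A store of such records lets an auditor reconstruct, after the fact, which model version made a given decision and on what basis.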
Industry Context
The governance effort arrives as AI adoption in financial services reaches broad penetration. A 2024 report by NVIDIA found that approximately 91% of financial services firms were either assessing or actively using AI, with fraud detection and risk modeling among the most common applications. Research from Deloitte shows that more than 70% of financial institutions plan to increase AI investment, with much of that spending directed at compliance monitoring and risk analysis.
That scale of adoption makes governance infrastructure a practical necessity rather than a compliance formality. E.SUN Bank says the framework is intended to help financial institutions introduce AI systems while maintaining regulatory oversight — positioning the work as a replicable model for the industry, not solely an internal exercise.
The project reflects a broader consensus forming across global banking: that the absence of structured oversight is the primary obstacle to moving AI from narrow applications into core operations at scale.