BNM’s AI Governance Framework Takes Shape Following 2025 Consultation

Bank Negara Malaysia’s August 2025 discussion paper on the use of artificial intelligence in financial services marked a clear inflection point in the country’s regulatory engagement with machine learning-driven finance. Rather than reacting to isolated incidents or headline risks, the central bank opted for an anticipatory approach, inviting structured industry feedback before codifying expectations. Six months on, that consultation process is now visibly shaping what market participants expect to see formalised through supervisory guidance and phased implementation during 2026.

The timing is not incidental. Malaysian fintech firms and incumbent financial institutions alike are scaling AI deployments across credit scoring, transaction monitoring, fraud detection, insurance underwriting, and increasingly, customer-facing engagement tools. These systems promise efficiency and risk sensitivity, but they also introduce opacity, dependency on complex data pipelines, and the possibility of unintended bias. BNM’s challenge, articulated clearly in the 2025 paper, has been to develop governance standards that preserve innovation momentum while anchoring accountability, explainability, and operational resilience firmly within existing prudential and consumer protection frameworks.

At the heart of the consultation lies an insistence that algorithmic decision-making must remain intelligible to both institutions and supervisors. BNM’s emphasis on traceable decision-making reflects a regulatory view that financial outcomes affecting individuals, such as credit approvals, pricing, or transaction blocks, cannot be shielded behind proprietary complexity. AI systems deployed in such contexts are expected to produce decisions that can be explained, reviewed, and, where necessary, contested. This positions AI governance not as a standalone regime but as an extension of long-established principles governing fair treatment and responsible conduct.

From this principle flow several concrete institutional expectations. Firms are required to maintain robust model documentation that records not only technical architecture but also data inputs, training assumptions, and observed outcomes. Bias monitoring occupies a prominent place in the framework, particularly where automated underwriting or claims assessment may amplify historical inequities embedded in legacy datasets. Equally significant is the insistence on human oversight. For high-stakes decisions, automation cannot be absolute; meaningful human intervention must remain possible, both as a safeguard and as a signal of organisational accountability.

Data governance emerged as one of the most closely scrutinised dimensions of the consultation. The increasing use of alternative data sources, ranging from behavioural indicators to device-level signals and granular transaction histories, has materially enhanced risk modelling capabilities. However, it has also raised fundamental questions around consent, proportionality, and downstream use. BNM’s discussion paper makes clear that customer consent for non-traditional data cannot be implied or bundled ambiguously. Data collection must remain proportionate to legitimate business objectives, and institutions are expected to demonstrate restraint rather than technological maximalism.

Third-party risk features prominently in this conversation. As AI models are increasingly trained, hosted, or optimised by external vendors, often through cloud-based infrastructures, accountability cannot be outsourced. While Malaysia’s Personal Data Protection Act provides baseline privacy protections, BNM is layering sector-specific expectations around data lineage, access controls, auditability, and cross-border data flows. The direction of travel is clear: financial institutions remain responsible for outcomes even when underlying capabilities are procured rather than built in-house.

Operational resilience forms the third pillar of the emerging framework. Mission-critical AI systems are expected to withstand not only routine performance fluctuations but also stress scenarios, including data corruption, model drift, or adversarial manipulation. BNM’s focus on stress testing, contingency planning, and human override mechanisms reflects broader global supervisory concerns that automation failures can propagate rapidly across interconnected systems. Vendor risk management, particularly for outsourced AI platforms, is no longer peripheral but central to resilience planning.

These concerns are echoed across the broader regulatory ecosystem. Insurance supervisors, in particular, have highlighted algorithmic fairness as a priority area, recognising that automated underwriting based on historical data can inadvertently replicate structural biases unless actively mitigated. The convergence of regulatory expectations across banking and insurance suggests a coordinated approach rather than fragmented oversight.

One of the more notable evolutions since the consultation has been the repositioning of BNM’s regulatory sandbox. Once viewed primarily as a space for experimentation, the sandbox is increasingly serving a dual function as an early governance validation environment. Fintech entrants expected to participate in 2026 are required not only to demonstrate technical novelty but also to show readiness for compliance, documentation, and risk controls. This signals a maturation of Malaysia’s fintech ecosystem, where innovation and governance are no longer sequential stages but parallel design considerations.

Industry responses to this shift have been uneven but largely pragmatic. Larger institutions have broadly welcomed the clarity, viewing structured expectations as a means of reducing competitive ambiguity and aligning internal investments with supervisory priorities. Smaller fintech firms face more acute resource constraints, particularly around documentation and monitoring. However, BNM’s reiterated commitment to proportionality, with controls calibrated to institutional size and systemic impact, provides some mitigation.

Strategically, Malaysia’s approach positions regulatory credibility as a competitive differentiator within a crowded regional fintech landscape. As cross-border partnerships and institutional capital increasingly favour jurisdictions with predictable governance standards, structured AI oversight becomes an enabler rather than a constraint. It distinguishes compliant innovation from unchecked experimentation and strengthens Malaysia’s appeal as a base for regionally scalable financial services.

Looking ahead through 2026, the transition from consultation to implementation appears deliberate rather than abrupt. Firms that embed governance considerations at the design stage, treating transparency and accountability as functional requirements rather than compliance afterthoughts, are likely to gain a first-mover advantage across credit, payments, and insurtech. The underlying message is consistent and unambiguous: innovation without accountability invites regulatory friction, while innovation grounded in governance builds market confidence and long-term resilience.