Regulators Confront the Explainability Gap in New Zealand’s Insurance AI

As artificial intelligence systems take on a more decisive role within New Zealand’s insurance sector, regulators are confronting a challenge that sits at the intersection of technology, law and public trust. While insurers increasingly rely on machine learning models to determine premiums, assess risk and influence claims outcomes, the logic underpinning these decisions is often difficult to articulate in clear and auditable terms. This so-called explainability gap has become a focal point of regulatory concern as authorities assess whether existing oversight frameworks are equipped for an era of algorithmic decision-making.

At the heart of the issue is the nature of modern AI systems. Unlike traditional rules-based models, machine learning algorithms derive insights by identifying patterns across vast datasets. These patterns can be highly effective in predicting outcomes but are not always intuitive or transparent. Even when models perform accurately, explaining why a specific customer was charged a higher premium or why a claim was flagged as high risk can be challenging.

For insurers, explainability is no longer a theoretical concern. Under New Zealand’s conduct and disclosure obligations, insurers must be able to provide clear reasons for decisions that materially affect customers. This includes underwriting outcomes, premium adjustments and claims determinations. When decisions are influenced by AI, the burden of explanation does not diminish. Instead, it becomes more complex.  

Regulatory bodies are taking notice. Officials at the Reserve Bank of New Zealand, which oversees prudential regulation of insurers, have acknowledged that opaque models introduce supervisory risk. If regulators cannot understand how risk assessments are produced, it becomes harder to evaluate whether insurers are managing capital prudently or exposing themselves to correlated losses during stress events.

The Financial Markets Authority, responsible for conduct regulation, faces a parallel challenge. From a consumer protection standpoint, unexplained or poorly explained decisions undermine confidence in financial markets. Regulators are increasingly concerned that customers may be left with generic explanations that satisfy formal requirements but fail to convey meaningful understanding.

In response, insurers are experimenting with various approaches to improve explainability. Some are adopting hybrid models that combine machine learning with interpretable rules-based components. Others are investing in explainable AI tools designed to identify which variables most influenced a particular decision. These tools can generate summaries intended for regulators and customers alike.  
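Many such tools rest on perturbation-based attribution: hold a customer's inputs fixed, swap one feature at a time to a neutral baseline, and measure how far the model's output moves. A minimal sketch of the idea, using a hypothetical closed-form scoring function in place of a real trained model (all names and numbers are illustrative, not drawn from any actual insurer):

```python
# Perturbation-based attribution sketch. The scoring function below is a
# stand-in for an opaque model; an insurer's real model would be a trained
# ML system, not a transparent formula. All values are illustrative.

def premium_score(age, claims_history, region_risk):
    # Hypothetical premium in dollars: base rate plus loadings.
    return 500 + 4.0 * max(0, 30 - age) + 120 * claims_history + 80 * region_risk

def attribute(score_fn, customer, baseline):
    """Per-feature score deltas: how much each input moved the score
    relative to a chosen baseline customer."""
    full = score_fn(**customer)
    deltas = {}
    for name in customer:
        probe = dict(customer)
        probe[name] = baseline[name]   # neutralise one feature at a time
        deltas[name] = full - score_fn(**probe)
    return deltas

customer = {"age": 22, "claims_history": 2, "region_risk": 1.5}
baseline = {"age": 40, "claims_history": 0, "region_risk": 0.0}
print(attribute(premium_score, customer, baseline))
# → {'age': 32.0, 'claims_history': 240.0, 'region_risk': 120.0}
```

On this toy function the deltas recover each feature's effect exactly. On a real model with interacting features, the attributions depend on the baseline chosen and on which other inputs are held fixed, which is one reason simplified explanations of complex models need careful validation.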

However, such solutions are not without limitations. Simplified explanations may obscure underlying complexity or mask interactions between variables. There is also a risk that explanations become performative rather than substantive, offering reassurance without genuine insight. Regulators are wary of accepting explanations that cannot be independently validated.

The debate has drawn in legal experts who note that explainability has implications beyond regulation. In the event of disputes or litigation, insurers may be required to defend their decisions in court. Judges and arbitrators are unlikely to accept reasoning that cannot be articulated in comprehensible terms. This creates pressure on insurers to ensure that AI-driven decisions remain legally defensible.

Data governance further complicates the picture. Many AI models rely on proprietary datasets or third-party data sources. Insurers may be reluctant to disclose details that they consider commercially sensitive. Regulators, however, argue that supervisory access to model logic and data inputs is essential for effective oversight.  

International developments are shaping the domestic conversation. The European Union’s proposed AI governance regime places strong emphasis on transparency and explainability, particularly for high-risk applications such as insurance. The United Kingdom has issued guidance encouraging firms to ensure that automated decision-making remains understandable and contestable. New Zealand regulators are studying these approaches as they consider whether additional guidance or standards are required locally.

Industry bodies have begun advocating for clarity. Insurers argue that without clear regulatory expectations, firms may over-invest in compliance measures that stifle innovation or under-invest and face enforcement risk later. Some have called for sector-specific guidance that recognises the unique characteristics of insurance while aligning with broader AI governance principles.

Academics and AI ethicists caution against viewing explainability as a purely technical problem. They argue that meaningful explanation depends on context and audience. What satisfies a data scientist may not satisfy a regulator or a customer. Developing explanations that are both accurate and accessible requires interdisciplinary collaboration.

As the use of AI in insurance deepens, regulators are likely to move from informal dialogue to more structured expectations. This could include requirements for model documentation, independent audits or periodic reviews of automated decision-making systems. Whether such measures are introduced through formal regulation or supervisory guidance remains an open question.

For now, the explainability gap remains one of the most significant fault lines in New Zealand’s insurance AI landscape. Insurers that treat explainability as a core governance issue rather than a compliance afterthought may be better positioned to navigate the evolving regulatory environment. Those that do not risk finding themselves exposed to supervisory action, legal challenge and erosion of public trust.

The question facing the sector is not whether AI can make accurate decisions, but whether those decisions can be understood, justified and trusted. The answer will shape how far and how fast artificial intelligence is allowed to transform New Zealand’s insurance industry.
