NZ Regulators Push for Explainable AI as Insurers Expand Claims Automation

New Zealand’s insurance regulators are sharpening their focus on explainable artificial intelligence as insurers accelerate the use of automated systems in claims assessment and settlement. While automation has already transformed underwriting, claims processing is emerging as the next regulatory frontier, raising fresh questions about transparency, dispute resolution, and consumer trust.

Across the sector, insurers are increasingly deploying AI-driven tools to triage claims, detect potential fraud, estimate repair costs, and recommend settlement amounts. In motor insurance, image-recognition software can now assess vehicle damage within minutes of a claim being lodged. Health and travel insurers are using automated rules engines to validate claims against policy terms and flag anomalies for further review.
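The kind of rules engine described above can be sketched in a few lines. In this illustration the claim fields, dollar thresholds, and routing labels are all assumptions for the sake of example, not any insurer's actual policy terms:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    claim_type: str       # e.g. "motor", "travel", "contents"
    amount: float         # claimed amount in NZD
    policy_limit: float   # cover limit under the policy
    excess: float         # policyholder excess

def triage(claim: Claim) -> str:
    """Return a routing decision for a lodged claim (illustrative rules only)."""
    if claim.amount <= claim.excess:
        return "decline: below excess"
    if claim.amount > claim.policy_limit:
        return "review: exceeds policy limit"
    if claim.amount < 2_000 and claim.claim_type in {"motor", "contents"}:
        return "auto-settle"   # straight-through processing for low-value claims
    return "review: manual assessment"
```

A real engine would encode far richer policy logic, but the shape is the same: explicit, inspectable rules whose outcome can later be explained to the customer.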

 
Industry data indicates that straight-through processing now accounts for a significant share of low-value claims, particularly in motor and contents insurance. For insurers, the benefits are clear. Automation reduces administrative overheads, shortens settlement timelines, and improves customer satisfaction scores by delivering faster outcomes.  

Regulators, however, are concerned that speed may be coming at the expense of explainability. The Financial Markets Authority has signalled that insurers must be able to clearly articulate how automated systems reach decisions, particularly when claims are declined, partially paid, or escalated for investigation.

Claims decisions sit at the most sensitive point of the insurer-customer relationship. When automation produces an adverse outcome, consumers often struggle to understand what went wrong. Unlike human assessors, algorithms cannot provide intuitive explanations unless they are specifically designed to do so.

 
The FMA has emphasised that insurers cannot rely on generic statements such as “the system determined the claim was outside policy coverage.” Instead, firms are expected to provide clear, policy-linked reasoning that customers can meaningfully challenge if they disagree.    

This regulatory stance reflects growing concern about fairness and access to redress. Consumer advocates argue that opaque automated decisions may discourage policyholders from disputing outcomes, particularly if they feel overwhelmed by technical explanations. There is also concern that automation could disproportionately affect vulnerable consumers who lack digital literacy.

As a result, insurers are rethinking how claims automation is implemented. Several firms have begun layering explainability tools over their AI systems, enabling them to generate plain-language summaries of decision pathways. Others are redesigning claims workflows to ensure that automated outcomes are reviewed by human assessors when certain thresholds are met.
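A plain-language summary layer of the sort described might look like the following sketch. The decision-record fields, reason codes, and clause reference are hypothetical, invented here to show how a machine decision can be tied back to specific policy terms:

```python
# Hypothetical explainability layer: turn a machine decision record into
# a plain-language, policy-linked summary the customer can challenge.
def explain(decision: dict) -> str:
    reasons = {
        "below_excess": "the claimed amount (${amount:,.2f}) is below your policy excess of ${excess:,.2f}",
        "exceeds_limit": "the claimed amount (${amount:,.2f}) exceeds your cover limit of ${limit:,.2f}",
    }
    template = reasons.get(decision["reason_code"], "the claim requires manual assessment")
    return (f"Outcome: {decision['outcome']}. "
            f"Reason: {template.format(**decision['facts'])} "
            f"(see clause {decision['policy_clause']} of your policy). "
            "If you disagree, you can request a human review.")
```

Note the structure the FMA's guidance implies: a concrete reason, a link to the governing policy clause, and an explicit path to challenge the outcome, rather than a generic "the system determined" statement.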

 
Human-in-the-loop models are becoming a regulatory expectation rather than merely a best practice. The FMA has made it clear that insurers must identify which decisions are appropriate for full automation and which require human judgment. High-impact claims, complex medical cases, and disputed outcomes are areas where manual oversight remains essential.
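One way to express that split between full automation and human judgment is a simple routing gate. In this sketch the NZ$50,000 threshold, the confidence cut-off, and the category names are invented for illustration, not regulatory figures:

```python
# Illustrative human-in-the-loop gate: decide whether an automated
# outcome must be escalated to a human assessor before it is acted on.
HIGH_IMPACT_NZD = 50_000            # assumed high-impact threshold
ALWAYS_MANUAL = {"medical", "disputed"}  # assumed always-manual categories

def needs_human_review(claim_type: str, amount: float,
                       model_confidence: float, adverse: bool) -> bool:
    if claim_type in ALWAYS_MANUAL:
        return True                 # complex or contested cases stay manual
    if amount >= HIGH_IMPACT_NZD:
        return True                 # high-impact claims get human sign-off
    if adverse and model_confidence < 0.95:
        return True                 # never auto-decline on a shaky prediction
    return False
```

The design choice worth noting is that the gate is deliberately conservative on adverse outcomes: a decline is escalated unless the model is highly confident, while routine approvals can flow straight through.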

The issue of algorithmic bias has also entered the claims conversation. While fraud detection tools are valuable in protecting insurers from losses, regulators are wary of models that disproportionately flag claims from particular demographics or regions. Even unintentional bias can erode trust if certain groups feel unfairly targeted.

Insurers are increasingly conducting regular bias audits to assess whether automated tools are producing skewed outcomes. These audits often involve testing models against synthetic datasets designed to simulate a wide range of customer profiles. Findings are reported to senior management and, in some cases, shared with regulators as part of supervisory engagement.
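An audit of the kind described, comparing flag rates across groups of synthetic profiles, could be sketched as follows. The scoring function here is a toy stand-in for a trained fraud model, and the profile fields are assumptions:

```python
def fraud_score(claim: dict) -> float:
    # Placeholder model; in practice this would be the insurer's
    # trained fraud detector scoring a synthetic claim profile.
    return 0.8 if claim["amount"] > 10_000 else 0.1

def audit_flag_rates(profiles: list[dict], threshold: float = 0.5) -> dict:
    """Compute the fraud-flag rate per region across synthetic profiles."""
    rates = {}
    for region in {p["region"] for p in profiles}:
        group = [p for p in profiles if p["region"] == region]
        flagged = sum(fraud_score(p) >= threshold for p in group)
        rates[region] = flagged / len(group)
    return rates   # large gaps between groups warrant investigation
```

In a real audit the grouping variable might be region, age band, or another demographic proxy, and a material gap in flag rates between otherwise-identical synthetic profiles is the signal that gets reported up to senior management.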

The regulatory push for explainable AI is also influencing vendor relationships. Insurers are demanding greater transparency from technology providers, including access to model logic, training data assumptions, and performance metrics. Black-box solutions that cannot be adequately explained are becoming harder to justify.

 
From a commercial perspective, insurers acknowledge that enhanced explainability increases costs. Developing interpretable models and maintaining audit trails requires investment in specialist talent and governance frameworks. However, many view these costs as necessary to avoid reputational damage and regulatory intervention, according to financial advisory firm Fintrade.

New Zealand’s stance aligns with broader international developments. Regulators in Australia and Europe are similarly emphasising explainability in automated claims processing, signalling that insurers cannot outsource accountability to algorithms.

As automation deepens its footprint in claims handling, the balance between efficiency and fairness will remain under close scrutiny. Regulators are not seeking to halt innovation. Instead, they are insisting that technological progress be matched by transparency, oversight, and meaningful consumer protections.
