Across New Zealand’s insurance sector, claims processing has emerged as the most visible frontier of artificial intelligence adoption. What was once a labour-intensive process involving adjusters, assessors and multiple layers of review is increasingly being streamlined through automated systems capable of triaging, approving and rejecting claims within minutes. Insurers say the transformation is essential to meet rising customer expectations and manage cost pressures. Regulators and consumer advocates, however, are beginning to ask whether speed and efficiency are coming at the expense of fairness and trust.
Over the past two years, several major insurers have deployed AI-enabled claims platforms that rely on image recognition, natural language processing and predictive analytics. Motor insurance claims are often the first to be automated, with customers uploading photographs of vehicle damage that are assessed by algorithms trained on thousands of historical cases. Health and travel insurance claims are also being routed through automated decision engines that flag routine cases for instant settlement while escalating complex claims for human review.
Insurers argue that automation has delivered tangible benefits. Average claims settlement times have fallen sharply, operational costs have declined and fraud detection rates have improved. For straightforward claims, particularly low-value ones, automation has reduced friction and improved customer satisfaction scores. In an increasingly competitive market, these gains are difficult to ignore.
Yet claims processing is also where insurers’ obligations to policyholders are most directly tested. When an automated system declines a claim or reduces a payout, the customer’s recourse depends on how transparent and contestable that decision is. Consumer advocates warn that automated claims decisions can feel arbitrary when customers are unable to understand how conclusions were reached.
The Insurance and Financial Services Ombudsman has reported a steady increase in complaints where customers cite dissatisfaction with automated processes, particularly where explanations are perceived as generic or incomplete. While not all such complaints involve AI, the growing use of automated systems has heightened scrutiny of how decisions are communicated and reviewed.
Regulators are responding cautiously. The Financial Markets Authority has reiterated that insurers remain fully accountable for claims decisions, regardless of whether those decisions are made by humans or machines. Existing conduct obligations require insurers to treat customers fairly and to provide clear reasons for decisions. The challenge lies in ensuring that AI-generated outcomes meet these standards in practice.
One of the central regulatory concerns is human oversight. Most insurers maintain that automated claims systems operate within defined thresholds. Claims that fall outside standard parameters are escalated to human assessors. However, regulators are increasingly interested in how those thresholds are set, how often overrides occur and whether staff are empowered to challenge algorithmic outputs.
There is also the issue of training data. Claims automation systems learn from historical claims outcomes, which may reflect past practices that are no longer appropriate. If historical data contains systemic biases or errors, these can be amplified through automation. Regulators are asking insurers to demonstrate how training data is curated, tested and updated to reflect current standards.
Fraud detection adds another layer of complexity. AI systems are highly effective at identifying anomalous patterns that may indicate fraudulent behaviour. While this can protect insurers and honest policyholders, false positives can result in legitimate claims being delayed or denied. Consumer advocates caution that heightened fraud scrutiny should not translate into an adversarial claims experience for ordinary customers.
Legal experts note that automated claims decisioning also raises questions under administrative and consumer law. If a claim is denied based on an algorithmic assessment, customers must still have access to meaningful review mechanisms. This includes the ability to have their case reconsidered by a human decision-maker who can exercise judgement beyond the confines of the model.
Insurers emphasise that automation is not intended to remove human judgement but to allocate it more efficiently. By allowing machines to handle routine claims, human assessors can focus on complex or sensitive cases. Some insurers report that this has improved staff morale and reduced burnout in claims teams.
Nevertheless, the perception gap remains significant. Customers often do not distinguish between automated and human decisions. When outcomes are unfavourable, trust in the insurer can erode quickly. Transparency around the use of AI in claims processing is therefore becoming a focal point of regulatory dialogue.
International experience is informing local policy discussions. Regulators in Australia and the United Kingdom have issued guidance emphasising the need for explainability, audit trails and human review in automated claims systems. New Zealand regulators are studying these developments as they consider whether additional guidance specific to insurance claims automation is warranted.
As claims automation becomes more sophisticated, the regulatory question is shifting from whether AI should be used to how it should be governed. Insurers that can demonstrate robust oversight, clear communication and fair review processes are likely to face less regulatory friction. Those that cannot may find that efficiency gains are outweighed by reputational and compliance risks.
Today, claims automation stands at a crossroads. It offers insurers a powerful tool to improve efficiency and customer experience. At the same time, it tests the foundational promise of insurance itself, which is to provide certainty and fairness at moments of vulnerability. How regulators, insurers and consumers navigate this tension will shape the next phase of AI integration in New Zealand’s insurance sector.