As artificial intelligence becomes embedded across underwriting, claims management and customer engagement, New Zealand’s insurance sector is approaching a regulatory inflection point. What began as a series of isolated technology deployments has evolved into a systemic shift in how insurers assess risk, interact with customers and make decisions with financial consequences. This has intensified calls for a coherent governance framework that addresses the specific risks of AI in insurance while preserving room for innovation.
Until now, New Zealand regulators have relied largely on existing prudential, conduct and privacy regimes to oversee AI adoption in insurance. Insurers remain subject to solvency requirements, fair conduct obligations and data protection laws regardless of the tools they use. Regulators argue that this principles-based approach offers flexibility and avoids prematurely constraining innovation.
However, as AI systems move from support functions into core decision-making roles, industry participants and policymakers alike are questioning whether current frameworks are sufficient. Unlike traditional systems, AI models can evolve over time, respond unpredictably to new data and generate outcomes that are difficult to trace back to specific rules or assumptions.
Officials at the Reserve Bank of New Zealand have acknowledged that widespread AI adoption introduces new forms of model risk that are not fully addressed by existing supervisory tools. From a prudential perspective, there is concern that correlated model behaviour across insurers could amplify systemic risks, particularly during periods of economic stress or natural catastrophe.
The Financial Markets Authority faces related challenges on the conduct side. Automated decisioning affects how customers are treated, how products are distributed and how complaints are resolved. Regulators must ensure that the use of AI does not undermine principles of fairness, transparency and accountability that underpin confidence in financial markets.
Industry bodies are increasingly vocal in calling for regulatory clarity. Insurers argue that uncertainty creates compliance risk and discourages long-term investment. Without clear expectations, firms may adopt inconsistent governance practices, increasing the likelihood of regulatory intervention after problems emerge.
Some stakeholders push for sector-specific guidance rather than broad AI legislation. Insurance, they argue, presents unique risks due to its reliance on probabilistic decision-making and its role in financial stability. Targeted guidance could address issues such as explainability standards, human oversight requirements and audit obligations tailored to insurance operations.
Others caution against over-regulation. Technology providers and insurtech firms warn that rigid rules could slow innovation and disadvantage New Zealand relative to larger markets. They point to the benefits AI has already delivered in reducing costs, improving service and expanding access to insurance products.
International developments are shaping the domestic debate. The European Union’s proposed AI regulatory framework classifies certain insurance-related AI systems, such as those used for risk assessment and pricing in life and health insurance, as high risk, subjecting them to strict governance requirements. The United Kingdom has opted for a more decentralised approach, empowering sector regulators to issue guidance aligned with overarching principles. Australia is also reviewing its approach, with insurance flagged as a priority sector.
New Zealand policymakers are closely watching these models. There is recognition that alignment with international standards can reduce compliance burdens for global insurers while enhancing regulatory credibility. At the same time, New Zealand’s relatively small and concentrated insurance market presents distinct supervisory dynamics.
The role of coordination across regulators is also under discussion. AI governance cuts across prudential supervision, conduct regulation, privacy protection and competition policy. Ensuring consistent oversight may require closer collaboration between agencies, potentially through joint guidance or shared supervisory frameworks.
Consumer advocates emphasise that any AI rulebook must prioritise customer outcomes. They argue that automated systems should not obscure accountability or limit access to redress. Clear rules on disclosure, review rights and human intervention are seen as essential to maintaining trust.
There is also growing interest in proactive regulatory tools. Some policymakers are exploring the use of regulatory sandboxes or pilot regimes that allow insurers to test AI applications under enhanced supervision. Such approaches could enable regulators to learn alongside industry while identifying risks early.
Despite the momentum, regulators remain cautious about moving too quickly. Crafting effective AI governance requires technical expertise, industry consultation and careful calibration. Premature or poorly designed rules could create loopholes or unintended consequences.

For now, New Zealand’s approach remains evolutionary rather than revolutionary. Regulators are engaging with insurers through supervisory dialogue, thematic reviews and informal guidance. However, the direction of travel is clear. As AI becomes integral to insurance operations, expectations around governance, documentation and accountability are likely to harden.
The question is no longer whether AI warrants regulatory attention, but what form that attention should take. The choices made over the next year will shape how innovation unfolds and how risks are managed in a sector that plays a critical role in economic resilience.
A well-calibrated AI insurance rulebook could offer certainty for insurers, protection for consumers and confidence for regulators. Achieving that balance will require careful design, sustained dialogue and a recognition that technology has changed not just how insurance is delivered, but how it must be governed.