Artificial intelligence is no longer a distant prospect for New Zealand’s insurance industry. It has begun to move decisively from the margins of data analytics into the core of underwriting, a domain long dominated by actuarial science, professional judgement and conservative assumptions. What was once the exclusive preserve of trained actuaries poring over mortality tables, claims histories and probabilistic models is now being supplemented by algorithms capable of processing vast datasets at speed and scale. This transition is reshaping how insurers classify risk, price premiums and assess portfolios, even as it raises searching questions about governance, accountability and trust.
At an operational level, insurers point to tangible benefits. Machine learning models can detect patterns in claims behaviour that might elude traditional methods, refine segmentation of customers and update risk assessments in near real time. Tasks such as initial risk screening, fraud flagging and premium optimisation are increasingly automated, improving efficiency and consistency. For a sector grappling with cost pressures, climate-related losses and rising customer expectations, these gains are difficult to ignore. AI, proponents argue, allows actuaries to spend less time on routine calculations and more on complex, judgement-heavy risks where human expertise remains indispensable.
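To make the point concrete, the sketch below shows how a fraud-flagging model of the kind described might be prototyped in Python with scikit-learn. The data is synthetic, and the column names, class balance and routing rule are hypothetical assumptions rather than any insurer's actual system; a production model would demand far more rigorous validation.

```python
# Illustrative sketch only: a minimal fraud-flagging prototype using scikit-learn.
# All column names, distributions and thresholds are hypothetical assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Synthetic claims data standing in for the historical records an insurer might hold.
claims = pd.DataFrame({
    "claim_amount": rng.lognormal(mean=8, sigma=1, size=n),
    "days_since_policy_start": rng.integers(1, 2000, size=n),
    "prior_claims_count": rng.poisson(0.5, size=n),
    "is_fraud": rng.binomial(1, 0.03, size=n),  # rare positive class
})

X = claims.drop(columns="is_fraud")
y = claims["is_fraud"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

# Score held-out claims; high scores would be routed to a human investigator,
# not automatically declined.
scores = model.predict_proba(X_test)[:, 1]
print(f"Hold-out AUC: {roc_auc_score(y_test, scores):.3f}")
```

The design point worth noting is that the model produces a score for triage, routing high-scoring claims to a human investigator rather than making the final decision itself, which is consistent with the augmentation role proponents describe.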
However, the deeper AI penetrates underwriting and pricing, the more regulatory attention it attracts. In insurance, underwriting decisions are not merely commercial choices. They affect solvency, capital adequacy and the long-term stability of firms. The use of opaque or highly complex models challenges traditional supervisory approaches that rely on explainable assumptions and traceable decision paths. The Reserve Bank of New Zealand has repeatedly cautioned that while AI can enhance risk management, it can also amplify financial system vulnerabilities. Poorly designed or inadequately governed models may contribute to market volatility, enable sophisticated fraud, encourage aggressive profit-seeking behaviour or embed flawed assumptions that only surface under stress.
These concerns are not hypothetical. History offers examples of financial models that performed well in benign conditions but failed spectacularly during shocks. AI, particularly when trained on historical data, risks repeating this pattern at greater speed. If multiple insurers rely on similar data sources or vendor-provided models, systemic risks may emerge, undermining the diversity of judgement that has traditionally acted as a buffer in the insurance market.
Despite these anxieties, the prevailing view within the profession is that AI is an augmentation tool rather than a replacement for actuaries. Global actuarial bodies consistently emphasise that responsibility for decisions cannot be delegated to algorithms. Actuaries remain accountable for model selection, validation and interpretation. Guidance stresses the importance of thorough documentation, regular bias testing, ongoing monitoring and adherence to professional standards. In this framework, AI automates the mechanical and repetitive, while humans retain oversight of complex, novel or ethically sensitive risks.
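As a minimal sketch of what such a bias test could look like, the example below compares acceptance rates across two hypothetical customer groups and flags the portfolio for review if the disparity breaches an illustrative 80 per cent threshold. The data, grouping variable and threshold are assumptions for illustration, not a prescribed professional standard.

```python
# Illustrative sketch only: a simple group-disparity check on underwriting decisions.
# The decision data, group labels and the 80% threshold are hypothetical assumptions.
import pandas as pd

decisions = pd.DataFrame({
    "region": ["urban"] * 600 + ["rural"] * 400,
    "declined": [0] * 540 + [1] * 60 + [0] * 330 + [1] * 70,
})

# Acceptance rate per group, then the ratio of the worst-off group to the best-off group
# (a demographic-parity style check; many other fairness metrics exist).
acceptance = (1 - decisions.groupby("region")["declined"].mean()).rename("acceptance_rate")
ratio = acceptance.min() / acceptance.max()

print(acceptance)
print(f"Acceptance-rate ratio: {ratio:.2f}")
if ratio < 0.8:  # the '80% rule' used here purely as an illustrative trigger
    print("Flag for review: disparity between groups exceeds the illustrative threshold.")
```

In practice, checks of this kind would be run across many candidate proxies and documented as part of the model's governance record, in line with the professional guidance described above.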
The Financial Markets Authority has taken a similarly balanced stance. Its scrutiny focuses less on the technology itself and more on outcomes for customers and risks to firms. Areas of concern include errors in automated decisions, discriminatory impacts arising from biased data, breaches of data privacy and the danger of over-reliance on systems that may not be fully understood by senior management. The FMA has made it clear that firms remain responsible for ensuring fair treatment of customers, regardless of whether decisions are made by people or machines.
Explainability and bias sit at the heart of these debates. AI systems trained on historical data may inadvertently perpetuate existing inequities, whether linked to socio-economic status, geography or other proxies. At the same time, supporters argue that algorithms, if properly governed, can apply rules more consistently than humans and reduce the subjective biases that sometimes creep into manual underwriting. The distinction lies not in the technology but in the robustness of governance frameworks, including clear accountability, diverse training data and mechanisms for challenge and review.
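One concrete mechanism for challenge and review is making a model's drivers inspectable. The sketch below, assuming a hypothetical decline model and invented features, uses permutation importance from scikit-learn to show how heavily decisions lean on a geographic proxy such as a postcode-based risk index.

```python
# Illustrative sketch only: making a model's drivers inspectable via permutation importance.
# The data, features and target are invented for illustration.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2_000

# Synthetic underwriting data; 'postcode_risk_index' stands in for the kind of
# geographic proxy a governance review might want to scrutinise.
features = pd.DataFrame({
    "age": rng.integers(18, 80, size=n),
    "sum_insured": rng.lognormal(11, 0.5, size=n),
    "postcode_risk_index": rng.normal(0, 1, size=n),
})
declined = (features["postcode_risk_index"] + rng.normal(0, 1, size=n) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(features, declined, random_state=1)
model = RandomForestClassifier(random_state=1).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much held-out accuracy falls:
# a large drop means the decision leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=1)
for name, score in sorted(zip(features.columns, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:22s} {score:.3f}")
```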
New Zealand’s Privacy Act adds another layer of obligation. Automated decisions that significantly affect individuals must be based on accurate, relevant and lawful data. The Privacy Commissioner has increasingly emphasised transparency in the use of AI, particularly as complaints rise about algorithmic decision-making across sectors. Customers, regulators and the public are demanding to know not only what decisions are made, but how and why they are reached.
International developments are shaping local thinking. Frameworks emerging from the United Kingdom and the European Union, with their focus on risk-based regulation and explainability, are closely watched by New Zealand policymakers and industry leaders. Domestically, initiatives such as the Algorithm Charter reflect a growing recognition that trust in AI requires shared principles, not just technical capability.
A 2025 industry survey underscores the urgency of the issue. AI governance was ranked as the top risk facing New Zealand insurers, ahead even of the suitability of existing regulation. Concerns ranged from unmanaged “shadow AI” deployed without oversight to the escalating sophistication of fraud enabled by artificial intelligence.
The message from the sector is clear: innovation is essential, but without robust policies, strong governance and professional accountability, the promise of AI in underwriting could quickly become a liability.
In this evolving landscape, the transition from actuaries to algorithms is less a handover than a partnership. The future of New Zealand insurance underwriting will likely be defined by how effectively human judgement and artificial intelligence are integrated, and how responsibly that integration is governed.

