For much of its history, the insurance industry’s relationship with customers has been mediated through agents, call centres and branch offices. In New Zealand, that interface is being rapidly reconfigured as AI-powered chatbots and voice agents take on a central role in customer engagement. From policy queries and renewals to claims updates and basic advisory interactions, AI systems are increasingly the first, and sometimes the only, point of contact between insurers and policyholders. This shift is prompting closer scrutiny from regulators concerned about disclosure, accountability and the fine line between information and advice.
Over the past year, several major insurers operating in New Zealand have expanded their use of conversational AI across websites, mobile apps and call centres. These systems rely on natural language processing to interpret customer queries and generate responses in real time. Insurers say the technology allows them to provide round-the-clock service, reduce wait times and handle high volumes of routine interactions more efficiently.
Customer engagement teams report that a significant proportion of inbound queries are now resolved without human intervention. Simple requests such as policy wording clarification, premium payment schedules and coverage confirmation are increasingly handled entirely by AI systems. In some cases, voice agents are also being used to guide customers through renewal processes or to collect preliminary information before escalating complex cases to human staff.
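The triage pattern described above can be sketched in a few lines of Python. This is an illustrative simplification only: the intent labels, canned responses and routing logic are invented for this example and do not reflect any insurer's actual taxonomy or system.

```python
# Hypothetical triage sketch: routine, whitelisted intents are answered
# automatically; anything else is escalated to a human agent.
# Intent names and responses are illustrative assumptions.

ROUTINE_INTENTS = {
    "policy_wording": "Here is the relevant policy wording section.",
    "payment_schedule": "Your next premium payment date is shown in the app.",
    "coverage_confirmation": "Yes, that item is covered under your policy.",
}

def triage(intent: str) -> tuple:
    """Return (handler, response) for a classified customer intent."""
    if intent in ROUTINE_INTENTS:
        return ("chatbot", ROUTINE_INTENTS[intent])
    # Complex or unrecognised queries fall through to a person.
    return ("human", "Transferring you to a customer service agent.")
```

The point of the pattern is the closed whitelist: the system never improvises outside the intents it is permitted to handle, which is what lets insurers claim that routine queries are resolved "without human intervention" while complex cases still reach staff.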
The expansion of AI-driven engagement is reshaping cost structures and staffing models. Insurers argue that automation frees human agents to focus on high-value interactions that require judgement or empathy. However, the growing reliance on synthetic interfaces raises important regulatory and ethical questions.
One of the primary concerns is disclosure. Regulators want customers to know when they are interacting with an automated system rather than a human representative. Transparency is seen as essential to maintaining trust, particularly when conversations involve financial products that can have long-term consequences. While many insurers disclose the use of AI in their digital channels, the clarity and prominence of such disclosures vary.
Another issue is the boundary between assistance and advice. Under New Zealand’s financial markets conduct regime, providing regulated financial advice triggers specific obligations around suitability, competence and accountability. AI systems are often positioned as informational tools, yet their responses can influence customer decisions about coverage levels, optional add-ons or policy changes.
Regulators are increasingly examining whether certain AI interactions cross into advisory territory. For example, when a chatbot suggests a particular coverage option based on a customer’s stated circumstances, questions arise as to whether this constitutes personalised advice. Insurers maintain that such systems operate within predefined scripts designed to avoid advisory recommendations. Nonetheless, as AI systems become more sophisticated, maintaining this distinction becomes more challenging.
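One way insurers attempt to keep a chatbot on the informational side of the line is to screen draft replies for advice-like phrasing before they are sent. The sketch below is a deliberately naive, keyword-based version of that guardrail; real deployments would use trained classifiers, and the marker phrases here are assumptions for illustration.

```python
# Hedged sketch of an "advice boundary" guardrail: draft replies that
# contain advisory phrasing are diverted to a licensed human rather than
# sent to the customer. The phrase list is illustrative, not exhaustive.

ADVISORY_MARKERS = (
    "you should",
    "we recommend",
    "best option for you",
    "suitable for your situation",
)

def is_potentially_advisory(reply: str) -> bool:
    """Return True if the draft reply contains advice-like phrasing."""
    lowered = reply.lower()
    return any(marker in lowered for marker in ADVISORY_MARKERS)

def route_reply(reply: str) -> str:
    """Divert advice-like replies to a human adviser for review."""
    if is_potentially_advisory(reply):
        return "ESCALATE_TO_HUMAN"
    return "SEND_TO_CUSTOMER"
```

The weakness regulators point to is visible even in this toy version: a response can influence a decision about coverage without ever using an explicit recommendation phrase, which is why the information/advice distinction gets harder to police as systems become more fluent.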
Some argue that customers may not fully appreciate the limitations of AI interactions. There is concern that customers could place undue reliance on automated responses, assuming they carry the same authority as advice from a licensed adviser. This is particularly relevant for vulnerable customers or those with limited financial literacy.
Data use is another area of scrutiny. Conversational AI systems collect and analyse large volumes of customer data to improve performance and personalise responses. Regulators are examining how this data is stored, whether it is used beyond the immediate interaction and how consent is obtained. The Office of the Privacy Commissioner has signalled that automated customer engagement will remain a priority area for oversight, particularly where sensitive personal information is involved.
Insurers respond that robust governance frameworks are in place. Many have established internal controls to monitor AI interactions, including regular audits of chatbot responses and escalation protocols when queries fall outside permitted parameters. Some insurers are also incorporating feedback mechanisms that allow customers to flag unsatisfactory or confusing interactions.
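An audit programme of the kind insurers describe typically combines two streams: every interaction a customer has flagged, plus a random sample of the rest. The following is a minimal sketch under assumed log fields (`flagged` as a boolean per interaction); it is not any insurer's actual tooling.

```python
# Illustrative audit sampler: human reviewers see all customer-flagged
# interactions plus a seeded random sample of unflagged ones.
# The log dictionary shape is an assumption for this example.
import random

def select_for_audit(logs, sample_rate=0.05, seed=0):
    """Pick flagged interactions plus a random sample of the rest."""
    rng = random.Random(seed)  # seeded so audits are reproducible
    flagged = [log for log in logs if log.get("flagged")]
    unflagged = [log for log in logs if not log.get("flagged")]
    k = max(1, int(len(unflagged) * sample_rate)) if unflagged else 0
    return flagged + rng.sample(unflagged, k)
```

Prioritising flagged interactions is what connects the customer feedback mechanism to the governance loop: a confusing or unsatisfactory exchange is guaranteed a human review rather than left to sampling luck.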
International experience is informing regulatory thinking. In markets such as the United Kingdom and Australia, regulators have issued guidance emphasising that firms remain responsible for the outcomes of automated customer interactions. New Zealand authorities are considering whether similar guidance is needed to clarify expectations around AI-driven engagement.
From an industry perspective, the challenge is to harness the efficiency of AI without undermining customer trust. Insurers that rely too heavily on automation risk alienating customers who value human interaction, particularly during stressful events such as claims. Conversely, firms that fail to modernise may struggle to meet evolving service expectations.
Technology providers argue that AI systems can enhance, rather than diminish, customer experience when deployed responsibly. Advances in sentiment analysis and conversational design are enabling systems to recognise distress or confusion and route interactions to human agents when appropriate. Such features may help address concerns about empathy and appropriateness.
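The distress-routing feature providers describe can be reduced to a simple decision rule: score each message for signs of distress and hand off to a human once a threshold is crossed. Production systems use trained sentiment models; the lexicon-based scorer below is a toy stand-in, and the word list is invented for illustration.

```python
# Minimal sketch of sentiment-based routing, assuming a lexicon-based
# distress detector. Real systems would use trained sentiment models;
# the term list here is an illustrative assumption.

DISTRESS_TERMS = {"stressed", "urgent", "confused", "upset", "emergency"}

def distress_score(message: str) -> int:
    """Count distinct distress terms appearing in the message."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return len(words & DISTRESS_TERMS)

def route(message: str, threshold: int = 1) -> str:
    """Hand off to a human once the distress score crosses the threshold."""
    return "human_agent" if distress_score(message) >= threshold else "chatbot"
```

Even in this crude form, the design choice is clear: the system is not trying to handle distress, only to recognise it early enough to get out of the way, which is the behaviour regulators and providers both point to when discussing empathy concerns.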
AI-driven customer engagement is likely to remain a focal point of regulatory dialogue in New Zealand’s insurance sector. The central question is not whether AI should be used, but how it should be governed to ensure clarity, fairness and accountability.
The insurance industry has long relied on trust as a foundational asset. As synthetic voices and chat interfaces become the new frontline, preserving that trust will depend on transparent practices, clear boundaries and a willingness to place consumer outcomes ahead of operational convenience.

