Consumer Trust, Data Governance Under Spotlight in NZ

As artificial intelligence becomes embedded across New Zealand’s insurance sector, consumer trust has emerged as a defining issue shaping how far and how fast automation can advance. While insurers emphasise efficiency gains and improved risk accuracy, policyholders and advocacy groups are increasingly focused on how personal data is collected, interpreted, and retained within AI-driven insurance systems.

Over the past few months, insurers have expanded the range of data points feeding into pricing, claims, and customer engagement models. Beyond traditional policy and claims histories, AI systems now incorporate behavioural data, device-generated information, geolocation signals, and third-party datasets. This expansion has sharpened questions about consent, transparency, and proportionality.

The Office of the Privacy Commissioner has noted a rise in inquiries related to automated decision-making in financial services, including insurance. While no systemic breaches have been identified, officials have signalled that existing privacy frameworks may need clearer guidance when applied to machine-learning models that continuously evolve.  

A central concern is informed consent. Insurance contracts typically disclose that data may be used for underwriting and claims assessment, but critics argue that these disclosures do not adequately explain how AI systems draw inferences or combine datasets. Consumers may consent to data use without fully understanding the implications for pricing or coverage eligibility.

Insurers counter that AI-driven analytics allow for more personalised products and faster service delivery. Claims processing, in particular, has benefited from automation, with some insurers reporting settlement times reduced from weeks to days for straightforward cases. From an operational perspective, AI reduces administrative overheads and improves fraud detection.    

However, the perception of constant monitoring has unsettled some customers. Telematics-based motor insurance and smart-home-linked property policies exemplify this tension. While these products reward low-risk behaviour, they also raise concerns about surveillance and data security.

Consumer advocates have called for clearer opt-out mechanisms and stronger assurances that data collected for one purpose will not be repurposed without explicit permission. There is also growing demand for meaningful explanations when AI-driven systems influence premium increases or claims outcomes.

Insurers have responded by strengthening internal data governance frameworks. Many firms have appointed dedicated data ethics committees to oversee how customer information is used across AI models. These committees review data sources, retention policies, and third-party sharing arrangements to ensure alignment with both legal obligations and consumer expectations.      

Another area of focus is data minimisation. Rather than collecting as much information as possible, insurers are being encouraged to justify why each data element is necessary. This approach reduces exposure to privacy risks while reinforcing trust.

From a regulatory standpoint, authorities are exploring whether additional disclosure requirements are needed for AI-driven insurance products. These could include plain-language summaries explaining how automated systems influence underwriting and claims decisions. While not yet mandated, such disclosures are increasingly viewed as best practice.

Data security remains a parallel concern. As insurers aggregate larger datasets to power AI models, they become more attractive targets for cyberattacks. The potential fallout from a data breach extends beyond financial loss to reputational damage and regulatory penalties. Insurers are therefore investing heavily in cybersecurity infrastructure and incident response planning.

Trust issues also intersect with cultural considerations. Māori data sovereignty principles emphasise collective ownership and stewardship of data related to Māori communities. Insurers operating in New Zealand must navigate these expectations carefully, particularly when AI models use geographic or demographic indicators that may disproportionately affect certain groups.      

Some insurers are beginning to engage directly with consumer groups to test perceptions of AI-driven products. These consultations aim to identify areas where transparency can be improved and misconceptions addressed before they escalate into formal disputes.

Industry leaders acknowledge that trust is not static. A single high-profile incident involving unfair pricing or data misuse could undermine confidence across the sector. As a result, insurers are increasingly treating trust as a strategic asset rather than merely a compliance obligation.

The expansion of AI in insurance is unlikely to slow, given the competitive pressures and efficiency gains involved. However, its long-term success will depend on how effectively insurers balance innovation with respect for consumer rights and expectations.

Trusted insurance broker and financial advisor Fintrade says that as regulators, insurers, and consumers continue to negotiate this evolving landscape, data governance and trust will remain central to the conversation. The next phase of AI adoption in New Zealand insurance will be shaped not only by technological capability, but by the industry's ability to demonstrate that automation serves policyholders as much as it serves balance sheets.