Artificial Intelligence Governance for New Zealand Insurance
Aotearoa New Zealand has no insurance-specific artificial intelligence regulation. That does not mean no one is paying attention. The Financial Markets Authority (FMA) expects fair conduct from every organisation using AI. The Privacy Act 2020 covers automated decisions about individuals. The Fair Trading Act 1986 applies to every AI-generated quote and recommendation. The Reserve Bank of New Zealand (RBNZ) holds insurers to solvency standards under the Insurance (Prudential Supervision) Act 2010.
Global regulation is heading in one direction. The businesses that build governance now will not be scrambling when compliance becomes mandatory. Our team helps insurers across New Zealand get ahead.
The Absence of Insurance AI Regulation Is Not the Absence of Risk
Unlike the EU AI Act or emerging US state laws, Aotearoa New Zealand has no prescriptive AI requirements for insurance. But that regulatory gap creates its own set of risk management challenges for organisations adopting AI. Proactive governance is the difference between readiness and exposure.
No clear standard to follow
Without specific AI rules, insurers are left to interpret existing legislation on their own. What counts as adequate governance? What does the FMA consider sufficient? What do RBNZ solvency standards require when AI drives underwriting decisions? Most organisations are guessing, and that guessing creates liability. Our consultants help insurers build governance grounded in what NZ regulators actually expect.
Global regulation is coming
The EU AI Act classifies insurance underwriting and credit scoring as high-risk. US states are passing algorithmic fairness laws. NZ insurers operating internationally, or using international AI vendors, will feel these pressures regardless.
Consumer trust is fragile
New Zealanders already have low trust in insurers. Add opaque AI making pricing and claims decisions, and a single public controversy could trigger both regulatory action and customer exodus. For businesses in Aotearoa's insurance market, proactive governance is reputation insurance that demonstrates the responsible practices the OECD AI Principles demand.
Where insurance artificial intelligence governance matters most
Not all insurance AI carries the same risk. The systems that affect individual people, including what they pay, whether their claim is approved, and whether they are flagged as suspicious, need the most rigorous oversight. Our team helps organisations prioritise governance where the stakes are highest.
Algorithmic underwriting
AI that decides who gets cover and at what price is making decisions that shape people's financial security. Proxy discrimination is the central risk: models trained on postcode, occupation, or claims history can systematically disadvantage specific communities without explicitly using prohibited grounds.
Governance need: Fairness testing across demographic groups, documented rationale for risk factors, human review for decline decisions, regular bias audits of underwriting models.
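As a rough illustration of what fairness testing across demographic groups can look like in practice, the sketch below (plain Python, with hypothetical group labels and toy decision data, not a production audit tool) computes decline rates per group and a simple disparity ratio between the highest and lowest rates. The 2.0 threshold shown is illustrative only, not a legal or actuarial standard.

```python
from collections import defaultdict

def decline_rates_by_group(decisions):
    """decisions: list of (group, declined) pairs, declined is a bool."""
    totals = defaultdict(int)
    declines = defaultdict(int)
    for group, declined in decisions:
        totals[group] += 1
        if declined:
            declines[group] += 1
    return {g: declines[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Ratio of highest to lowest decline rate across groups.
    A ratio well above 1.0 flags the model for closer human review."""
    return max(rates.values()) / min(rates.values())

# Hypothetical underwriting outcomes: (demographic group, declined?)
decisions = [
    ("group_a", True), ("group_a", False), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", False), ("group_b", False),
]
rates = decline_rates_by_group(decisions)
print(rates)                   # {'group_a': 0.25, 'group_b': 0.5}
print(disparity_ratio(rates))  # 2.0
```

A real audit would use statistically meaningful sample sizes, intersectional group definitions, and confidence intervals, but the basic question is the same: do decline rates diverge materially between groups, and can the divergence be justified by legitimate actuarial factors?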
Premium-setting algorithms
Dynamic pricing models that adjust premiums based on behavioural data, telematics, or external datasets can produce outcomes that are actuarially sound but socially unfair. The line between legitimate risk differentiation and unfair discrimination is not always obvious.
Governance need: Transparency about pricing factors, impact analysis on vulnerable groups, documented justification for data sources, mechanisms for customers to query premium calculations.
Claims automation and triage
Straight-through processing accelerates simple claims. But when AI triages complex claims, assesses damage from photographs, or decides which claims get fast-tracked versus flagged for investigation, the stakes for individual claimants are significant. A wrongly delayed claim can cause genuine hardship.
Governance need: Clear escalation paths to human assessors, monitoring of claim resolution times by triage category, transparency about automated decision criteria, appeal processes that are accessible.
Fraud detection
Pattern recognition systems flag suspicious claims for investigation. False positives subject legitimate claimants to invasive scrutiny, delayed payments, and reputational harm. If your fraud model disproportionately flags certain demographic groups, that is a fairness problem.
Governance need: False positive rate monitoring by demographic group, clear process for resolving flagged claims, accuracy benchmarking, regular model recalibration as fraud patterns evolve.
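To make the false-positive monitoring concrete, here is a minimal sketch (plain Python, hypothetical case data) of the core metric: for each demographic group, the share of legitimate claims that the fraud model wrongly flagged for investigation. Comparing these rates across groups is the starting point for detecting disparate scrutiny.

```python
from collections import defaultdict

def false_positive_rates(cases):
    """cases: list of (group, flagged, actually_fraudulent) tuples.
    FPR = wrongly flagged legitimate claims / all legitimate claims, per group."""
    legit = defaultdict(int)
    flagged_legit = defaultdict(int)
    for group, flagged, fraud in cases:
        if not fraud:
            legit[group] += 1
            if flagged:
                flagged_legit[group] += 1
    return {g: flagged_legit[g] / legit[g] for g in legit}

# Hypothetical claims: (group, model flagged it?, claim actually fraudulent?)
cases = [
    ("group_a", True, False), ("group_a", False, False),
    ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False),
    ("group_b", False, False), ("group_b", False, True),
]
fpr = false_positive_rates(cases)
print(fpr)  # group_b's legitimate claims are flagged twice as often as group_a's
```

In production this would be computed on resolved claims (where ground truth is known) on a rolling basis, with alerts when any group's rate drifts outside an agreed tolerance of the overall rate.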
NZ legislation that already governs insurance AI
Aotearoa New Zealand does not have AI-specific insurance regulation. But existing laws cover the outcomes that matter. The question is whether your governance demonstrates compliance with each of these laws, and whether your organisation can evidence that compliance when the FMA or RBNZ asks.
Conduct of Financial Institutions
Fair conduct obligations under CoFI
- Fair treatment duty: Insurers must treat consumers fairly throughout the product lifecycle, including when AI drives decisions
- Effective fair conduct programmes: AI systems that interact with customers or affect outcomes need to be covered by your fair conduct programme
- Monitoring and reporting: The FMA expects you to monitor whether AI systems are producing fair outcomes and to report issues
- Vulnerable consumers: AI systems must account for the needs of vulnerable consumers, not just optimise for efficiency
Privacy Act 2020
Information privacy principles
- Collection limitation: You can only collect personal information for AI models if there is a lawful purpose directly connected to your business functions
- Data accuracy: Training data and inference inputs must be sufficiently accurate for the decisions being made about individuals
- Access and correction: Individuals can request access to their personal information held in AI systems and correct inaccuracies
- Overseas disclosure: If your AI vendor processes data offshore, you need adequate safeguards and the individual must be informed
Fair Trading Act 1986
Misleading conduct and unfair practices
- Misleading representations: AI-generated quotes, policy recommendations, and communications must not mislead consumers about coverage or pricing
- Unfair contract terms: AI-driven policy terms that create significant imbalance in rights and obligations may be challenged
- Substantiation: If you market AI-enhanced accuracy or personalisation, those claims must be supportable
Human Rights Act 1993
Prohibited grounds of discrimination
- Insurance exception: Section 48 allows differentiation based on actuarial or statistical data, but the exception has limits: it must be reasonable to rely on that data in the circumstances
- Proxy discrimination: Even without using prohibited attributes directly, AI models can produce discriminatory outcomes through correlated features
- Burden of proof: If a discrimination complaint is filed, the insurer must demonstrate the AI decision was based on legitimate actuarial or statistical data
Governance built for NZ insurance
Our team helps insurers build governance that maps to the legislation that exists today and positions organisations for whatever regulation comes next. Grounded in Aotearoa's regulatory reality, not generic frameworks adapted from overseas.
AI Governance Programmes
Governance designed around insurance-specific AI use cases. Approval workflows for new models, ongoing monitoring protocols, and board reporting that covers fairness, privacy, and conduct obligations. Our consultants build programmes that satisfy FMA expectations and RBNZ solvency standards while enabling responsible adoption in your organisation.
Underwriting Fairness Audits
Independent assessment of your underwriting and pricing models for bias, proxy discrimination, and unfair outcomes. Our team tests across demographic groups, including Māori and Pacific populations, and documents findings in a format that demonstrates due diligence to the FMA. This risk management approach aligns with the OECD AI Principles New Zealand has endorsed.
Privacy and Data Compliance
Privacy Act 2020 compliance for AI systems that process customer data across your organisation. Collection limitations, accuracy obligations, cross-border transfer safeguards, and access request handling for AI-driven decisions. We ensure your business meets the standards the Office of the Privacy Commissioner expects.
Common questions about insurance AI governance in NZ
If there are no AI-specific insurance regulations in NZ, why invest in governance now?
Three reasons. First, existing legislation already applies: the FMA's fair conduct obligations, the Privacy Act 2020, the Fair Trading Act 1986, the Human Rights Act 1993, and RBNZ solvency standards under the Insurance (Prudential Supervision) Act 2010 all cover the outcomes of AI decisions even if they do not mention the technology by name. Second, global regulation is moving fast, and NZ insurers using international vendors or operating across borders will be affected. Third, retroactive compliance is far more expensive than building governance into your AI programme from the start. Our consultants help organisations across Aotearoa build proactive governance that reduces cost and risk.
What does the FMA expect from insurers using AI?
The FMA has not issued AI-specific guidance for insurers, but the Conduct of Financial Institutions (CoFI) regime requires fair conduct programmes that cover all customer interactions and outcomes. If AI is making or influencing decisions about customers, it falls within scope. The FMA expects insurers to monitor customer outcomes, address systemic issues, and treat vulnerable consumers appropriately, regardless of whether a human or algorithm is making the decision.
How does the Fair Trading Act apply to insurance AI?
The Fair Trading Act prohibits misleading or deceptive conduct in trade. For insurers using AI, this means AI-generated quotes must accurately reflect actual pricing, chatbot responses must not misrepresent policy terms, and marketing claims about AI-driven personalisation or accuracy must be substantiated. If a customer is misled by an AI system, even unintentionally, the insurer is liable.
What Privacy Act 2020 obligations apply to our AI systems?
The information privacy principles apply to all personal information your AI systems collect, use, and store. Key obligations include: collecting only what you need for a lawful purpose, ensuring data accuracy for the decisions being made, giving individuals access to their information on request, and putting safeguards in place when personal data is sent offshore to AI vendors. If your AI makes automated decisions about individuals, you should be able to explain the basis for those decisions.
How should we prepare for future AI regulation?
Build governance that is principles-based and adaptable. Start with a complete inventory of your AI systems and their risk profiles. Implement fairness testing for high-impact models. Document decision-making processes for underwriting and claims AI. Establish human oversight mechanisms aligned with the OECD AI Principles that New Zealand has endorsed. These steps align with every major AI governance framework globally and will position your organisation to comply with whatever specific requirements Aotearoa introduces. Our team builds programmes that are ready for today and adaptable for tomorrow.
Can our underwriting AI legally differentiate based on risk factors that correlate with ethnicity or gender?
Section 48 of the Human Rights Act allows insurers to differentiate based on actuarial or statistical data that is reasonable in the circumstances. However, this exception is narrower than many insurers assume. If your AI model uses proxy variables that correlate with prohibited grounds and you cannot demonstrate the differentiation is based on legitimate actuarial data, you face discrimination risk. The safest approach is regular bias audits that test model outputs across demographic groups and document the actuarial justification for each risk factor.
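One simple screening step in a bias audit is to check whether a candidate model feature correlates strongly with a protected attribute. The sketch below (plain Python with invented data; the 0.5 threshold is illustrative, not a legal standard) computes a Pearson correlation between group membership and a hypothetical postcode-derived risk score.

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical data: 1 = member of a protected group, paired with a
# postcode-derived risk score the model uses. Strong correlation
# signals that the score may act as a proxy for the protected attribute.
protected = [1, 1, 1, 0, 0, 0]
risk_score = [0.8, 0.7, 0.9, 0.3, 0.2, 0.4]
r = pearson(protected, risk_score)
if abs(r) > 0.5:  # illustrative screening threshold only
    print(f"Potential proxy variable: r = {r:.2f}")
```

Correlation alone does not establish unlawful discrimination, and its absence does not clear a model; it simply identifies which features warrant the documented actuarial justification that section 48 reliance requires.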
Insurance AI governance before the regulator comes knocking
Our consultants will map your AI systems against current NZ legislation, including FMA conduct obligations, RBNZ solvency standards, the Privacy Act 2020, and the Fair Trading Act 1986. We will identify the governance gaps and build a compliance programme that protects your policyholders and your licence. Start with a no-obligation assessment.