
AI Governance for New Zealand Insurance

New Zealand has no insurance-specific AI regulation. That does not mean no one is paying attention. The FMA expects fair conduct. The Privacy Act 2020 covers automated decisions about individuals. The Fair Trading Act applies to every AI-generated quote and recommendation. And global regulation is heading in one direction. The insurers who build governance now will not be scrambling later.


The Absence of Insurance AI Regulation Is Not the Absence of Risk

Unlike jurisdictions covered by the EU AI Act or emerging US state laws, New Zealand imposes no prescriptive AI requirements on insurance. But that regulatory gap creates its own set of problems for insurers adopting AI.

No clear standard to follow

Without specific AI rules, insurers are left to interpret existing legislation on their own. What counts as adequate governance? What does the FMA consider sufficient? Most insurers are guessing, and that guessing creates liability.

Global regulation is coming

The EU AI Act classifies insurance underwriting and credit scoring as high-risk. US states are passing algorithmic fairness laws. NZ insurers operating internationally, or using international AI vendors, will feel these pressures regardless.

Consumer trust is fragile

New Zealanders already have low trust in insurers. Add opaque AI making pricing and claims decisions, and a single public controversy could trigger both regulatory action and customer exodus. Proactive governance is reputation insurance.

Where insurance AI governance matters most

Not all insurance AI carries the same risk. The systems that affect individual people -- what they pay, whether their claim is approved, whether they are flagged as suspicious -- need the most rigorous oversight.

Highest risk

Algorithmic underwriting

AI that decides who gets cover and at what price is making decisions that shape people's financial security. Proxy discrimination is the central risk: models trained on postcode, occupation, or claims history can systematically disadvantage specific communities without explicitly using prohibited grounds.

Governance need: Fairness testing across demographic groups, documented rationale for risk factors, human review for decline decisions, regular bias audits of underwriting models.
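The fairness testing described above reduces to a small, repeatable calculation. A minimal sketch of a disparate-impact check, assuming a decision log that can be mapped to (group, declined) pairs -- the field names and the example data are illustrative, not from any insurer's system:

```python
from collections import defaultdict

def decline_rates(decisions):
    """Decline rate per demographic group.

    `decisions` is an iterable of (group, declined) pairs -- a
    hypothetical shape; adapt it to your own decision log schema.
    """
    totals = defaultdict(int)
    declines = defaultdict(int)
    for group, declined in decisions:
        totals[group] += 1
        if declined:
            declines[group] += 1
    return {g: declines[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the highest to the lowest group decline rate.

    A ratio well above 1.0 flags the model for actuarial review;
    the threshold you act on is a policy choice, not a statute.
    """
    return max(rates.values()) / min(rates.values())

# Illustrative data: group_b is declined twice as often as group_a
decisions = [
    ("group_a", True), ("group_a", False), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", False), ("group_b", False),
]
rates = decline_rates(decisions)
print(rates)                          # {'group_a': 0.25, 'group_b': 0.5}
print(disparate_impact_ratio(rates))  # 2.0
```

Run this per model, per release, and archive the output: the documented rationale the governance need calls for is much easier to produce when the audit trail already exists.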

Highest risk

Premium-setting algorithms

Dynamic pricing models that adjust premiums based on behavioural data, telematics, or external datasets can produce outcomes that are actuarially sound but socially unfair. The line between legitimate risk differentiation and unfair discrimination is not always obvious.

Governance need: Transparency about pricing factors, impact analysis on vulnerable groups, documented justification for data sources, mechanisms for customers to query premium calculations.

High risk

Claims automation and triage

Straight-through processing accelerates simple claims. But when AI triages complex claims, assesses damage from photographs, or decides which claims get fast-tracked versus flagged for investigation, the stakes for individual claimants are significant. A wrongly delayed claim can cause genuine hardship.

Governance need: Clear escalation paths to human assessors, monitoring of claim resolution times by triage category, transparency about automated decision criteria, appeal processes that are accessible.

High risk

Fraud detection

Pattern recognition systems flag suspicious claims for investigation. False positives subject legitimate claimants to invasive scrutiny, delayed payments, and reputational harm. If your fraud model disproportionately flags certain demographic groups, that is a fairness problem.

Governance need: False positive rate monitoring by demographic group, clear process for resolving flagged claims, accuracy benchmarking, regular model recalibration as fraud patterns evolve.
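The false-positive monitoring above is likewise a small calculation once the data is joined up. A sketch, assuming each record can be reduced to the claimant's demographic group, whether the model flagged the claim, and whether investigation confirmed fraud -- hypothetical field shapes, to be mapped from your own claims system:

```python
from collections import defaultdict

def false_positive_rates(records):
    """False positive rate per group: of the claims that turned out
    to be legitimate, what fraction did the model flag as suspicious?

    `records` is an iterable of (group, flagged, confirmed_fraud)
    tuples -- an illustrative shape, not a real schema.
    """
    legit = defaultdict(int)          # legitimate claims per group
    flagged_legit = defaultdict(int)  # legitimate claims wrongly flagged
    for group, flagged, confirmed_fraud in records:
        if not confirmed_fraud:
            legit[group] += 1
            if flagged:
                flagged_legit[group] += 1
    return {g: flagged_legit[g] / legit[g] for g in legit}

# Illustrative data: group_b's legitimate claims are flagged far more often
records = [
    ("group_a", True, False),   # legitimate claim, wrongly flagged
    ("group_a", False, False),
    ("group_a", False, False),
    ("group_a", True, True),    # fraud, correctly flagged
    ("group_b", True, False),
    ("group_b", True, False),
    ("group_b", False, False),
]
print(false_positive_rates(records))
# group_a: 1 of 3 legitimate claims flagged; group_b: 2 of 3
```

A gap between groups here is exactly the fairness problem the paragraph above describes, surfaced before a complainant or regulator surfaces it for you.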

NZ legislation that already governs insurance AI

New Zealand does not have AI-specific insurance regulation. But existing laws cover the outcomes that matter. The question is whether your AI governance demonstrates compliance with each of these.

FMA

Conduct of Financial Institutions

Fair conduct obligations under CoFI

  • Fair treatment duty: Insurers must treat consumers fairly throughout the product lifecycle, including when AI drives decisions
  • Effective fair conduct programmes: AI systems that interact with customers or affect outcomes need to be covered by your fair conduct programme
  • Monitoring and reporting: The FMA expects you to monitor whether AI systems are producing fair outcomes and to report issues
  • Vulnerable consumers: AI systems must account for the needs of vulnerable consumers, not just optimise for efficiency

Privacy Act 2020

Information privacy principles

  • Collection limitation: You can only collect personal information for AI models if there is a lawful purpose directly connected to your business functions
  • Data accuracy: Training data and inference inputs must be sufficiently accurate for the decisions being made about individuals
  • Access and correction: Individuals can request access to their personal information held in AI systems and correct inaccuracies
  • Overseas disclosure: If your AI vendor processes data offshore, you need adequate safeguards and the individual must be informed

Fair Trading Act 1986

Misleading conduct and unfair practices

  • Misleading representations: AI-generated quotes, policy recommendations, and communications must not mislead consumers about coverage or pricing
  • Unfair contract terms: AI-driven policy terms that create significant imbalance in rights and obligations may be challenged
  • Substantiation: If you market AI-enhanced accuracy or personalisation, those claims must be supportable

Human Rights Act 1993

Prohibited grounds of discrimination

  • Insurance exception: Section 48 allows differentiation based on actuarial or statistical data on which it is reasonable to rely, but the exception is narrow and the data must genuinely support the differentiation
  • Proxy discrimination: Even without using prohibited attributes directly, AI models can produce discriminatory outcomes through correlated features
  • Burden of proof: If a discrimination complaint is filed, the insurer must demonstrate the AI decision was based on legitimate actuarial or statistical data

Common questions about insurance AI governance in NZ

If there are no AI-specific insurance regulations in NZ, why invest in governance now?

Three reasons. First, existing legislation already applies: the FMA's fair conduct obligations, the Privacy Act 2020, the Fair Trading Act, and the Human Rights Act all cover the outcomes of AI decisions even if they do not mention AI by name. Second, global regulation is moving fast, and NZ insurers using international vendors or operating across borders will be affected. Third, retroactive compliance is far more expensive than building governance into your AI programme from the start.

What does the FMA expect from insurers using AI?

The FMA has not issued AI-specific guidance for insurers, but the Conduct of Financial Institutions (CoFI) regime requires fair conduct programmes that cover all customer interactions and outcomes. If AI is making or influencing decisions about customers, it falls within scope. The FMA expects insurers to monitor customer outcomes, address systemic issues, and treat vulnerable consumers appropriately -- regardless of whether a human or algorithm is making the decision.

How does the Fair Trading Act apply to insurance AI?

The Fair Trading Act prohibits misleading and deceptive conduct in trade. For insurers using AI, this means AI-generated quotes must accurately reflect actual pricing, chatbot responses must not misrepresent policy terms, and marketing claims about AI-driven personalisation or accuracy must be substantiated. If a customer is misled by an AI system, the insurer can be liable even where the misrepresentation was unintentional.

What Privacy Act 2020 obligations apply to our AI systems?

The information privacy principles apply to all personal information your AI systems collect, use, and store. Key obligations include: collecting only what you need for a lawful purpose, ensuring data accuracy for the decisions being made, giving individuals access to their information on request, and putting safeguards in place when personal data is sent offshore to AI vendors. If your AI makes automated decisions about individuals, you should be able to explain the basis for those decisions.

How should we prepare for future AI regulation?

Build governance that is principles-based and adaptable. Start with a complete inventory of your AI systems and their risk profiles. Implement fairness testing for high-impact models. Document decision-making processes for underwriting and claims AI. Establish human oversight mechanisms. These steps align with every major AI governance framework globally and will position your organisation to comply with whatever specific requirements NZ introduces.
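The inventory step above can start very lightweight. A sketch of a risk-tiered AI register -- the field names are illustrative, and the risk tiers mirror the categories used on this page rather than any statutory classification:

```python
from dataclasses import dataclass, field

@dataclass
class AISystem:
    """One entry in the AI system inventory.

    Fields are illustrative -- extend with owner, vendor, data
    sources, and review dates as your programme matures.
    """
    name: str
    function: str           # e.g. underwriting, pricing, claims triage, fraud
    risk_tier: str          # "highest", "high", "moderate"
    affects_individuals: bool
    human_review: bool      # human in the loop for adverse outcomes?
    legislation: list[str] = field(default_factory=list)

inventory = [
    AISystem("Underwriting scorer", "underwriting", "highest", True, True,
             ["CoFI", "Privacy Act 2020", "Human Rights Act 1993"]),
    AISystem("Claims triage model", "claims triage", "high", True, False,
             ["CoFI", "Privacy Act 2020"]),
]

# Governance gap check: high-impact systems with no human oversight
gaps = [s.name for s in inventory
        if s.risk_tier in ("highest", "high")
        and s.affects_individuals and not s.human_review]
print(gaps)  # ['Claims triage model']
```

Even this crude register answers the first question any future regulator will ask: what AI do you run, what does it decide, and who is watching it.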

Can our underwriting AI legally differentiate based on risk factors that correlate with ethnicity or gender?

Section 48 of the Human Rights Act allows insurers to differentiate based on actuarial or statistical data that is reasonable in the circumstances. However, this exception is narrower than many insurers assume. If your AI model uses proxy variables that correlate with prohibited grounds and you cannot demonstrate the differentiation is based on legitimate actuarial data, you face discrimination risk. The safest approach is regular bias audits that test model outputs across demographic groups and document the actuarial justification for each risk factor.

Insurance AI governance before the regulator comes knocking

We will map your AI systems against current NZ legislation, identify the governance gaps, and build a programme that protects your policyholders and your licence. Start with a no-obligation assessment.
