Financial Services

AI Governance for New Zealand Financial Services

The majority of surveyed New Zealand financial firms already use artificial intelligence. The Financial Markets Authority has been researching AI across banking, insurance, asset management, and financial advice since 2024. The Reserve Bank of New Zealand published its financial stability analysis on AI. Neither regulator has issued prescriptive rules yet, but both expect your organisation to manage AI risks under existing obligations.

That pre-regulation window is closing. The businesses that build governance now will shape the standards. Our specialists help NZ financial institutions build proactive AI governance programmes that satisfy FMA, RBNZ, and Privacy Act 2020 requirements before prescriptive rules arrive.

See the Gap
AI Governance Dashboard for NZ Financial Services
9 in 10

NZ financial firms using AI (FMA 2024)

0

AI-specific compliance requirements mandated

Closing

Window to build governance before rules arrive

No Rules Does Not Mean No Risk

New Zealand has no AI Act. No prescriptive AI governance standard for financial services. That silence is often misread as safety. For organisations using AI in lending, insurance, trading, and customer onboarding, it is the opposite.

Existing obligations already apply

The Conduct of Financial Institutions (CoFI) Act 2022 requires fair conduct towards consumers. The Financial Markets Conduct Act includes Fair Dealing provisions. The Privacy Act 2020 governs automated decision-making through its 13 Information Privacy Principles. The Anti-Money Laundering and Countering Financing of Terrorism Act (AML/CFT) applies to AI-driven transaction monitoring and customer screening. These laws were not written for AI, but they apply to it. Every algorithmic credit decision, every automated insurance assessment, every chatbot interaction falls under existing compliance obligations.

Regulators are actively studying AI

The FMA conducted research across asset management, banking, financial advice, and insurance sectors in 2024. The RBNZ published "Rise of the Machines" analysing AI's impact on financial stability. Both regulators expect you to manage these risks under your existing obligations. They are building their supervisory understanding. When prescriptive rules arrive, the FMA and RBNZ will assess your organisation against what they find. Those with governance in place will adapt quickly, not scramble for compliance.

Vendor concentration is a systemic risk

NZ regulators have flagged vendor concentration as a key concern for financial stability. When multiple banks (ANZ NZ, BNZ, Westpac NZ, ASB, Kiwibank) rely on the same AI vendor or foundation model, a single failure cascades across the financial system. The RBNZ considers this a material stability concern that boards must actively manage. Your AI supply chain requires the same governance rigour as your traditional outsourcing arrangements.

Treaty obligations extend to algorithmic fairness

AI models in lending, insurance pricing, and credit assessment affect Māori and Pacific populations. Te Tiriti o Waitangi creates obligations around equitable outcomes that extend to automated decision-making. Māori data sovereignty principles require organisations to consider how data about Māori is collected, processed, and used in AI systems. Financial services businesses that ignore these considerations face both regulatory and reputational risk unique to Aotearoa New Zealand.

What NZ Regulators Expect Today

There is no AI rulebook. But there are clear signals from the FMA, RBNZ, and Privacy Commissioner about what responsible governance looks like in financial services. We help your team navigate these expectations.

FMA

Financial Markets Authority

Conduct Regulator

The FMA's position is clear: financial innovations must be introduced responsibly. AI does not get a special exemption from fair conduct obligations. The FMA has committed to ensuring responsible use across the sector.

CoFI Act Fair Conduct

Fair conduct obligations under the Conduct of Financial Institutions Act extend to AI-driven processes. If an algorithm treats customers unfairly, that is a CoFI breach regardless of whether a human was involved in the decision.

FMC Act Fair Dealing

Fair Dealing provisions prohibit misleading conduct. AI-generated financial communications, product recommendations, and marketing must meet the same standard as human-produced content under the Financial Markets Conduct Act.

Cross-Sector AI Research

The FMA has researched AI use across banking, insurance, asset management, and financial advice in New Zealand. This research informs future regulatory expectations and supervisory focus areas.

Customer Outcomes Focus

The FMA evaluates AI through the lens of customer outcomes. Does the system improve outcomes for consumers? Does it create risks of harm? That is the test organisations must prepare for.

RBNZ

Reserve Bank of New Zealand

Prudential Regulator

The RBNZ expects regulated entities to manage AI risks under existing prudential obligations. Their "Rise of the Machines" analysis identified systemic risks that AI introduces to financial stability in New Zealand.

Operational Resilience

AI systems are operational infrastructure. When they fail, services fail. The RBNZ expects banks and insurers to demonstrate that these failures will not disrupt critical financial services.

Vendor Concentration Risk

Multiple NZ banks using the same AI vendor or foundation model creates systemic concentration risk. The RBNZ has flagged this as a financial stability concern that institutions must actively manage through robust governance.

Model Risk Management

Models that affect credit decisions, capital calculations, or risk assessments require validation, monitoring, and governance. The RBNZ expects the same rigour applied to AI as to traditional quantitative models.

Market Distortion Risks

Herding behaviour from similar AI models, algorithmic pricing convergence, and correlated trading strategies can distort markets. The RBNZ monitors these systemic effects across New Zealand's financial sector.

OPC

Privacy Commissioner

Data Protection

The Privacy Act 2020 governs how financial institutions collect, store, and use personal information in AI systems. Every credit model, every customer profile, every automated decision involves personal data subject to the 13 Information Privacy Principles.

Information Privacy Principles

The 13 IPPs apply to AI training data, inference inputs, and outputs. Purpose limitation, data quality, and retention rules constrain how these systems can process personal information under New Zealand law.

Automated Decision Transparency

Customers have the right to know when decisions affecting them are made by algorithms. Financial institutions must be able to explain how an automated system reached a decision about a specific individual.

Cross-Border Data Transfers

IPP 12 requires comparable privacy protections when personal data is disclosed offshore. AI vendors processing NZ customer data in overseas jurisdictions must meet these requirements, and organisations should also track emerging Consumer Data Right developments in New Zealand.

AML/CFT Compliance

AI-driven transaction monitoring and customer due diligence must satisfy Anti-Money Laundering and Countering Financing of Terrorism Act requirements. Automated screening decisions require auditability and governance oversight.

Four Risks NZ Regulators Have Flagged

The FMA and RBNZ have identified specific artificial intelligence risks in the New Zealand financial system. These are not hypotheticals. They are the areas regulators are watching and organisations must address proactively.

Errors in AI Systems

AI models make mistakes. In financial services, those mistakes affect loan approvals, insurance claims, and investment recommendations. Regulators want to know how you detect errors, how quickly your team responds, and what harm mitigation is in place for New Zealand customers.

Data Privacy Exposure

AI systems ingest vast amounts of customer data. Training data leakage, inference attacks, and inadequate data minimisation create privacy risks under the Privacy Act 2020 that both the Privacy Commissioner and FMA monitor closely. Compliance with the 13 Information Privacy Principles is not optional for AI systems.

Market Distortions

When multiple institutions deploy similar models, their decisions can converge. Correlated lending, synchronised pricing, and herding behaviour in investment create systemic risks the RBNZ is actively monitoring across New Zealand's financial sector.

Vendor Concentration

A small number of AI vendors serve most NZ banks and insurers. If ANZ NZ, BNZ, Westpac NZ, ASB, and Kiwibank all depend on the same provider, one vendor failure impacts the entire system. The RBNZ considers this a material stability concern requiring active board oversight.

What We Deliver

We do not sell frameworks off the shelf. We build governance programmes tailored to how your institution actually uses AI, mapped against the regulatory expectations that apply to New Zealand financial services organisations.

How We Work with NZ Financial Institutions

We start with where you are, not where a template says you should be. We understand the unique challenges that New Zealand's financial services landscape creates for AI governance.

01

AI Inventory and Exposure Mapping

Most institutions do not have a complete picture of where AI is deployed. We catalogue every AI system, model, and vendor relationship. We map each one to FMA, RBNZ, Privacy Act 2020, and AML/CFT obligations. Your team gets a clear view of actual regulatory exposure.
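An inventory of this kind can be as simple as a structured catalogue that ties each system to the obligations it triggers. The sketch below is illustrative only: the system names, vendor names, and the use-case-to-obligation mapping are hypothetical placeholders, and real mappings would be set with legal counsel.

```python
from dataclasses import dataclass, field

# Hypothetical mapping from use case to triggered obligations.
# A real mapping requires legal analysis, not a lookup table.
OBLIGATION_MAP = {
    "credit_decisioning": ["CoFI fair conduct", "Privacy Act 2020 IPPs", "FMC Act fair dealing"],
    "transaction_monitoring": ["AML/CFT Act", "Privacy Act 2020 IPPs"],
    "customer_chatbot": ["FMC Act fair dealing", "Privacy Act 2020 IPPs"],
}

@dataclass
class AISystem:
    name: str
    vendor: str            # external provider or "in-house"
    use_case: str          # key into OBLIGATION_MAP
    obligations: list = field(default_factory=list)

def build_inventory(systems):
    """Attach the applicable obligations to each catalogued system."""
    for s in systems:
        s.obligations = OBLIGATION_MAP.get(s.use_case, ["unclassified: escalate for review"])
    return systems

inventory = build_inventory([
    AISystem("loan-scorer-v3", "VendorA", "credit_decisioning"),
    AISystem("txn-watch", "VendorA", "transaction_monitoring"),
])
for s in inventory:
    print(s.name, "->", s.obligations)
```

Anything the catalogue cannot classify gets escalated rather than silently skipped, which is the property regulators look for in an exposure map.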

02

Governance Programme Design

We design governance that fits your organisational structure: policies, approval processes, risk classification, monitoring cadence, and board reporting, all calibrated to the size and complexity of your AI operations. We incorporate Treaty of Waitangi considerations for fairness in algorithmic decisions affecting Māori and Pacific populations, aligned with the OECD AI Principles adopted in New Zealand's National AI Strategy.
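Risk classification of this kind usually reduces to a small number of tiers that set the approval path and monitoring cadence. The function below is a hypothetical three-tier scheme with made-up criteria, shown only to illustrate the shape of such a classification; real criteria would be agreed with your legal and risk teams.

```python
def risk_tier(affects_customers: bool, automated_decision: bool, uses_personal_data: bool) -> str:
    """Assign an illustrative governance tier to an AI system.

    Hypothetical scheme: the more risk factors a system has, the
    tighter its approval path and monitoring cadence.
    """
    score = sum([affects_customers, automated_decision, uses_personal_data])
    if score >= 3:
        return "Tier 1: board-visible, monthly monitoring"
    if score == 2:
        return "Tier 2: committee approval, quarterly review"
    return "Tier 3: standard change control"

print(risk_tier(True, True, True))    # e.g. credit decisioning
print(risk_tier(False, False, False)) # e.g. internal drafting aid
```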

03

Regulatory Readiness and Documentation

When the FMA or RBNZ asks how you govern AI, you need documentation that answers clearly. We prepare the risk assessments, policy documents, compliance evidence, and board papers you need, ready before the question is asked.

Common Questions from NZ Financial Institutions

The FMA has not mandated AI governance. Why should we invest now?

Because the FMA has explicitly stated it expects financial innovations to be introduced responsibly. It has conducted cross-sector AI research and is building its supervisory approach. The OECD AI Principles that underpin New Zealand's National AI Strategy set clear expectations for transparency, accountability, and fairness. Institutions that wait for prescriptive rules will face compressed timelines and higher costs. Those that build governance proactively will influence standards and adapt quickly when rules arrive.

Does the CoFI Act actually apply to artificial intelligence?

The Conduct of Financial Institutions Act requires fair conduct towards consumers of financial services. It does not mention AI specifically, but its obligations are technology-neutral. If an algorithm produces unfair outcomes for customers, the institution is responsible under CoFI regardless of whether the decision was made by a person or a model. The FMA has confirmed this interpretation. We help organisations map CoFI obligations to specific AI use cases.

What does the RBNZ expect from us on artificial intelligence?

The Reserve Bank of New Zealand expects regulated entities to manage AI risks under their existing prudential obligations. This means operational resilience for AI-dependent systems, model risk management for AI models, and vendor risk management for AI providers. The RBNZ's "Rise of the Machines" analysis made clear that AI is a financial stability concern, not just an operational efficiency tool. Directors also face liability under the Companies Act 1993 for failing to manage these risks adequately.

How do AML/CFT obligations interact with AI governance?

The Anti-Money Laundering and Countering Financing of Terrorism Act requires organisations to conduct customer due diligence, monitor transactions, and report suspicious activity. When AI systems perform these functions, they must be governed with the same rigour as manual processes. AI-driven transaction monitoring must be explainable, auditable, and subject to regular review. False positive rates and false negative rates both carry regulatory consequences, making governance essential for compliance.
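Auditability in practice means every automated screening decision leaves a record an examiner can reconstruct. A minimal sketch of such a record follows; the field names and the hash-instead-of-raw-data design are assumptions for illustration, not a prescribed AML/CFT format.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_screening_decision(customer_ref, features, model_version, outcome, log):
    """Append an auditable record of an automated screening decision.

    The raw input features are hashed so the log itself holds no
    personal data, while still letting auditors verify exactly which
    inputs and which model version produced the outcome.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "customer_ref": customer_ref,
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "model_version": model_version,
        "outcome": outcome,  # e.g. "clear" or "flag_for_review"
    }
    log.append(record)
    return record

audit_log = []
rec = log_screening_decision(
    "cust-001", {"txn_count": 42, "offshore": True}, "screen-v1.2", "flag_for_review", audit_log
)
print(rec["outcome"], rec["model_version"])
```

Pinning the model version in each record is what makes false-positive and false-negative rates attributable to a specific model release during regulatory review.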

We are a smaller institution. Do we really need formal AI governance?

The scope should match your AI footprint, but the answer is yes. Even Kiwibank-scale institutions use AI for credit decisioning, fraud detection, and customer service. If AI affects customer outcomes, you need governance around it. A regional insurer needs different governance than a major bank, and we scale our approach accordingly.

How do you handle vendor concentration risk specifically?

Our specialists map your complete AI vendor ecosystem, identify single points of failure, and assess concentration at every layer: foundation models, cloud infrastructure, data providers, and application vendors. We then build contingency plans and diversification strategies that satisfy RBNZ prudential expectations without requiring you to abandon vendors that work well. The result is a vendor ecosystem that is resilient, diversified, and well-governed.
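The layered mapping described above can be sketched as a small dependency analysis: list each system's providers per supply-chain layer, then flag any provider serving more than one system as a candidate single point of failure. The system and provider names below are hypothetical.

```python
from collections import defaultdict

# Hypothetical dependency map: system -> provider at each supply-chain layer.
DEPENDENCIES = {
    "credit-model":   {"foundation_model": "ProviderX", "cloud": "CloudA"},
    "fraud-detector": {"foundation_model": "ProviderX", "cloud": "CloudB"},
    "chat-assistant": {"foundation_model": "ProviderY", "cloud": "CloudA"},
}

def concentration_report(deps):
    """Count systems per provider at each layer; keep shared providers.

    Any provider serving more than one system is a candidate single
    point of failure that warrants a contingency plan.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for layers in deps.values():
        for layer, provider in layers.items():
            counts[layer][provider] += 1
    return {
        layer: {p: n for p, n in providers.items() if n > 1}
        for layer, providers in counts.items()
    }

print(concentration_report(DEPENDENCIES))
```

Here ProviderX (foundation model layer) and CloudA (cloud layer) each serve two systems, so both would be flagged for contingency planning.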

AI Governance for Your Financial Services Organisation

Schedule a conversation about your institution's AI footprint, the regulatory obligations that apply today under the FMA, RBNZ, Privacy Act 2020, and AML/CFT Act, and how to build governance that prepares your New Zealand organisation for what comes next.

Explore All NZ Services