AI Governance for New Zealand Financial Services
Nine out of ten New Zealand financial firms already use AI. The FMA has been researching AI across banking, insurance, asset management, and financial advice since 2024. The RBNZ published its financial stability analysis on AI. Neither regulator has issued prescriptive rules yet.
That window is closing. The firms that build governance now will shape the standards. The rest will scramble to comply.
No Rules Doesn't Mean No Risk
New Zealand has no AI Act. No prescriptive AI governance standard for financial services. That silence is often misread as safety. It is the opposite.
Existing obligations already apply
The CoFI Act 2022 requires fair conduct towards consumers. The FMC Act includes Fair Dealing provisions. The Privacy Act 2020 governs the personal information that feeds automated decision-making. These laws were not written for AI, but they apply to it. Every algorithmic credit decision, every automated insurance assessment, every chatbot interaction falls under existing obligations whether your governance recognises it or not.
Regulators are actively studying AI
The FMA conducted research across asset management, banking, financial advice, and insurance sectors in 2024. The RBNZ published "Rise of the Machines" analysing AI's impact on financial stability. Both regulators expect you to manage AI risks under your existing obligations. They are building their understanding. Prescriptive rules will follow.
Vendor concentration is a systemic risk
NZ regulators have flagged vendor concentration as a key concern. When multiple banks rely on the same AI vendor or foundation model, a single failure cascades across the financial system. If your AI supply chain mirrors your competitors', you share their risk. Neither the FMA nor the RBNZ will accept that as an excuse.
Proactive governance is a competitive advantage
When prescriptive rules arrive, the institutions that already have governance frameworks will adapt in weeks. Everyone else will spend months or years catching up. More immediately, strong AI governance builds trust with customers, boards, and regulators. In a market where trust differentiates banks, that matters.
What NZ Regulators Expect Today
There is no AI rulebook. But there are clear signals from the FMA, RBNZ, and Privacy Commissioner about what responsible AI looks like in financial services.
Financial Markets Authority
Conduct Regulator
The FMA's position is clear: financial innovations must be introduced responsibly. AI does not get a special exemption from fair conduct obligations.
CoFI Act Fair Conduct
Fair conduct obligations under CoFI extend to AI-driven processes. If an algorithm treats customers unfairly, that is a CoFI breach regardless of whether a human was involved.
FMC Act Fair Dealing
Fair Dealing provisions prohibit misleading conduct. AI-generated financial communications, product recommendations, and marketing must meet the same standard as human-produced content.
Cross-Sector AI Research
The FMA has researched AI use across banking, insurance, asset management, and financial advice. This research informs future regulatory expectations. Expect targeted guidance.
Customer Outcomes Focus
The FMA evaluates AI through the lens of customer outcomes. Does the AI improve outcomes for consumers? Does it create risks of harm? That is the test.
Reserve Bank of New Zealand
Prudential Regulator
The RBNZ expects regulated entities to manage AI risks under existing prudential obligations. Their "Rise of the Machines" analysis identified systemic risks that AI introduces to financial stability.
Operational Resilience
AI systems are operational infrastructure. When they fail, services fail. The RBNZ expects banks and insurers to demonstrate that AI failures will not disrupt critical financial services.
Vendor Concentration Risk
Multiple NZ banks using the same AI vendor or foundation model creates systemic concentration risk. The RBNZ has flagged this as a financial stability concern that institutions must actively manage.
Model Risk Management
AI models that affect credit decisions, capital calculations, or risk assessments require validation, monitoring, and governance. The RBNZ expects the same rigour applied to traditional models.
Market Distortion Risks
Herding behaviour from similar AI models, algorithmic pricing convergence, and correlated trading strategies can distort markets. The RBNZ monitors these systemic effects.
Privacy Commissioner
Data Protection
The Privacy Act 2020 governs how financial institutions collect, store, and use personal information in AI systems. Every credit model, every customer profile, every automated decision involves personal data.
Information Privacy Principles
The 13 IPPs apply to AI training data, inference inputs, and outputs. Purpose limitation, data quality, and retention rules constrain how AI systems can process personal information.
Automated Decision Transparency
The Privacy Commissioner expects transparency when decisions affecting individuals are made by algorithms, and the Privacy Act's access and correction principles let customers request the personal information used about them. Financial institutions should be able to explain how AI reached a decision about a specific individual.
Four Risks NZ Regulators Have Flagged
The FMA and RBNZ have identified specific AI risks in the New Zealand financial system. These are not hypotheticals. They are the areas regulators are watching.
Errors in AI Systems
AI models make mistakes. In financial services, those mistakes affect loan approvals, insurance claims, and investment recommendations. Regulators want to know how you detect errors, how quickly you respond, and what harm mitigation is in place.
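One way to make "how do you detect errors and how quickly do you respond" concrete is a per-model error budget checked every monitoring window. The sketch below is illustrative only: the model names, the idea of counting overturned decisions as confirmed errors, and the budget thresholds are all assumptions, not anything the FMA or RBNZ prescribes.

```python
from dataclasses import dataclass

@dataclass
class ModelHealthCheck:
    """One monitoring window for a production model (hypothetical fields)."""
    model_id: str
    decisions: int
    confirmed_errors: int   # e.g. overturned decisions, complaint-driven reversals
    error_budget: float     # maximum tolerable error rate set by governance

    def error_rate(self) -> float:
        # Guard against empty windows (no decisions made).
        return self.confirmed_errors / self.decisions if self.decisions else 0.0

    def breaches_budget(self) -> bool:
        return self.error_rate() > self.error_budget


def triage(checks: list[ModelHealthCheck]) -> list[str]:
    """Return model IDs whose error rate breached budget this window."""
    return [c.model_id for c in checks if c.breaches_budget()]


checks = [
    ModelHealthCheck("credit-scoring-v3", decisions=10_000,
                     confirmed_errors=35, error_budget=0.005),
    ModelHealthCheck("claims-triage-v1", decisions=4_000,
                     confirmed_errors=12, error_budget=0.002),
]
print(triage(checks))  # → ['claims-triage-v1']
```

The useful property is that "how quickly we respond" becomes auditable: a breached budget triggers escalation to a named governance forum, and the check itself is documentation regulators can inspect.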
Data Privacy Exposure
AI systems ingest vast amounts of customer data. Training data leakage, inference attacks, and inadequate data minimisation create privacy risks that the Privacy Commissioner and FMA both monitor closely.
Market Distortions
When multiple institutions deploy similar AI models, their decisions can converge. Correlated lending, synchronised pricing, and herding behaviour in investment create systemic risks the RBNZ is actively monitoring.
Vendor Concentration
A small number of AI vendors serve most NZ banks and insurers. If ANZ NZ, BNZ, Westpac NZ, ASB, and Kiwibank all depend on the same provider, one vendor failure impacts the entire system. The RBNZ considers this a material stability concern.
What We Deliver
We do not sell frameworks off the shelf. We build governance programmes tailored to how your institution actually uses AI, mapped against the regulatory expectations that actually apply to you.
FMA-Aligned AI Risk Assessment
A thorough assessment of your AI systems against FMA conduct expectations, CoFI Act fair conduct obligations, and FMC Act Fair Dealing provisions. We identify which systems carry regulatory risk and which are low priority.
CoFI Compliance Framework for AI
A governance framework that maps your AI applications to CoFI Act obligations. Fair conduct programmes, customer outcomes monitoring, and complaint handling processes designed to cover algorithmic decisions.
Operational Resilience Plan for AI
An operational resilience plan that satisfies RBNZ expectations. Covers AI system availability, failure recovery, business continuity for AI-dependent processes, and incident response protocols.
Vendor Concentration Risk Analysis
A detailed analysis of your AI vendor dependencies, single points of failure, and concentration risks. Includes diversification recommendations and contingency planning for vendor disruption scenarios.
How We Work with NZ Financial Institutions
We start with where you are, not where a template says you should be.
AI Inventory and Exposure Mapping
Most institutions do not have a complete picture of where AI is deployed. We catalogue every AI system, model, and vendor. We map each one to FMA, RBNZ, and Privacy Act obligations. You get a clear view of your actual regulatory exposure.
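An inventory entry only becomes useful when it derives obligations, not just lists systems. A minimal sketch of that mapping, assuming three yes/no exposure flags per system — the system names, flags, and the way each flag maps to a regulatory lens are illustrative assumptions, not a legal classification:

```python
# Hypothetical inventory entries; field names and systems are illustrative.
INVENTORY = [
    {
        "system": "retail-credit-scoring",
        "vendor": "in-house",
        "affects_customer_outcomes": True,
        "processes_personal_info": True,
        "prudentially_material": True,
    },
    {
        "system": "marketing-copy-assistant",
        "vendor": "VendorA",
        "affects_customer_outcomes": False,
        "processes_personal_info": False,
        "prudentially_material": False,
    },
]

def obligations(entry: dict) -> list[str]:
    """Derive which regulatory lenses apply to one inventory entry."""
    out = []
    if entry["affects_customer_outcomes"]:
        out.append("CoFI fair conduct / FMC fair dealing (FMA)")
    if entry["processes_personal_info"]:
        out.append("Privacy Act 2020 IPPs (Privacy Commissioner)")
    if entry["prudentially_material"]:
        out.append("Operational resilience / model risk (RBNZ)")
    return out

for e in INVENTORY:
    print(e["system"], "->", obligations(e))
```

Systems that derive an empty obligation list are the "low priority" tail; everything else gets a governance owner and a monitoring cadence.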
Governance Programme Design
We design governance that fits your organisational structure. Policies, approval processes, risk classification, monitoring cadence, board reporting. Everything calibrated to the size and complexity of your AI operations, not a one-size-fits-all template.
Regulatory Readiness and Documentation
When the FMA or RBNZ asks how you govern AI, you need documentation that answers clearly. We prepare the risk assessments, policy documents, and compliance evidence you need. Ready before the question is asked.
Common Questions from NZ Financial Institutions
The FMA hasn't mandated AI governance. Why should we invest now?
Because the FMA has explicitly stated it expects financial innovations to be introduced responsibly. It has conducted cross-sector AI research and is building its supervisory approach. Institutions that wait for prescriptive rules will face compressed timelines and higher costs. Those that build governance proactively will influence standards and adapt quickly when rules arrive.
Does the CoFI Act actually apply to AI?
The CoFI Act requires fair conduct towards consumers of financial services. It does not mention AI specifically, but its obligations are technology-neutral. If an algorithm produces unfair outcomes for customers, the institution is responsible under CoFI regardless of whether the decision was made by a person or a model. The FMA has confirmed this interpretation.
What does the RBNZ expect from us on AI?
The RBNZ expects regulated entities to manage AI risks under their existing prudential obligations. This means operational resilience for AI-dependent systems, model risk management for AI models, and vendor risk management for AI providers. The RBNZ's "Rise of the Machines" analysis made clear that AI is a financial stability concern, not just an operational efficiency tool.
We're a smaller institution. Do we really need formal AI governance?
The scope should match your AI footprint, but the answer is yes. Even Kiwibank-scale institutions use AI for credit decisioning, fraud detection, and customer service. If AI affects customer outcomes, you need governance around it. We scale our approach to your size. A regional insurer needs different governance than a major bank, and we build accordingly.
What is the difference between proactive and reactive AI governance?
Reactive governance means scrambling to build policies after a regulator asks, an incident occurs, or rules are published. Proactive governance means having frameworks, risk assessments, and documentation in place before those triggers. In practical terms: proactive governance costs less, causes less disruption, and gives you input into how standards develop. Reactive governance costs more, takes longer, and happens under pressure.
How do you handle vendor concentration risk specifically?
We map your complete AI vendor ecosystem, identify single points of failure, and assess concentration at every layer: foundation models, cloud infrastructure, data providers, and application vendors. We then build contingency plans and diversification strategies that satisfy RBNZ prudential expectations without requiring you to abandon vendors that work well.
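Concentration at each layer can be scored with a standard measure such as the Herfindahl-Hirschman index over exposure shares. The sketch below is a simplified illustration, assuming hypothetical providers and a single "share of critical AI workloads" metric; in practice the exposure measure, the layers, and any threshold for escalation are judgment calls, not RBNZ-mandated numbers.

```python
def herfindahl(exposures: dict[str, float]) -> float:
    """Herfindahl-Hirschman index over exposure shares.

    Returns a value in (0, 1]: 1.0 means everything sits with one
    provider; values near 1/n mean exposure is evenly spread over n.
    """
    total = sum(exposures.values())
    return sum((v / total) ** 2 for v in exposures.values())

# Hypothetical share of critical AI workloads per foundation-model provider.
workloads = {"ProviderA": 6.0, "ProviderB": 2.0, "ProviderC": 2.0}
print(round(herfindahl(workloads), 2))  # → 0.44
```

Computing the same index separately for foundation models, cloud infrastructure, and data providers makes single points of failure visible even when the application-vendor layer looks diversified.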
AI Governance for Your Financial Services Organisation
Schedule a conversation about your institution's AI footprint, the regulatory obligations that apply today, and how to build governance that positions you ahead of what comes next. No sales pitch. Just a clear-eyed discussion about where you stand.