Artificial Intelligence Risk Framework Development for New Zealand
New Zealand has no AI-specific risk regulation. That does not mean the risk does not exist. It means the entire burden of identifying, classifying, and managing AI risk falls on your organisation. And, under the Companies Act 1993, that burden falls on your directors personally. We build the risk frameworks that protect your organisation.
We build bespoke AI risk frameworks grounded in NZ law, covering the Privacy Act 2020, Fair Trading Act 1986, Companies Act director duties, and Treaty of Waitangi obligations, so your organisation is not guessing at what "reasonable care" looks like when something goes wrong. Practical controls for organisations across Aotearoa New Zealand.
The Voluntary Approach Has a Catch
Light-touch regulation sounds like freedom. In practice, it means every AI risk decision your organisation makes is a judgement call. If that judgement turns out to be wrong, there is no prescribed standard to point to as a defence. We help organisations navigate this ambiguity with structured risk controls.
Director Exposure Under Companies Act
Section 137 of the Companies Act 1993 requires directors to exercise reasonable care, diligence, and skill. If an AI system causes harm and the board cannot demonstrate it understood and managed the risks, directors face personal liability. Most boards have no AI risk framework to rely on. We build the evidence trail directors need.
Treaty Obligations Add Unique Complexity
AI systems that process data relating to Māori and Pacific communities carry risk dimensions that do not exist anywhere else. Māori data sovereignty, cultural safety, and Treaty of Waitangi obligations create risk categories that no off-the-shelf framework addresses. Te Tiriti o Waitangi demands controls built specifically for Aotearoa, not imported from overseas jurisdictions.
Multiple Laws, No Unified View
Privacy Act 2020 covers personal information. Fair Trading Act 1986 covers misleading conduct. Companies Act covers director duties. Consumer Guarantees Act covers service quality. Your AI risks sit across all of them, but no single framework connects the dots. We build the integrated risk view your organisation needs, mapping every AI use case to its specific legal obligations across New Zealand law.
When 25% of New Zealand organisations identify governance as the "missing link" in their AI adoption, it signals something important: the technology is moving faster than the practices designed to contain it. In a principles-based regulatory environment, that gap belongs to your organisation. With generative AI projected to contribute over 15% to New Zealand's GDP by 2038, the opportunity demands governance to match.
- PolyGovern analysis of NZ AI governance landscape, drawing on industry survey data (2025)
Built for a Market Without a Rulebook
Importing an overseas risk framework and bolting it onto your organisation does not work. Aotearoa New Zealand's regulatory landscape is different: principles-based, light-touch, and shaped by obligations that are unique to this country. We build frameworks that reflect that reality rather than ignoring it.
Grounded In
NZ Regulatory Risk Mapping
We map every AI use case against the specific NZ legislation that applies: Privacy Act 2020 information privacy principles, Fair Trading Act misleading conduct provisions, Companies Act director duties, and sector-specific obligations from the FMA or RBNZ. Every risk is tied to an actual legal obligation that your organisation must manage, with no generic checklists.
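To make the shape of this mapping concrete, here is a minimal sketch of how a use-case-to-legislation map could be represented as structured data. The class, field names, and entries are illustrative assumptions for this page, not the delivered artefact or a statement of legal advice:

```python
from dataclasses import dataclass, field

@dataclass
class UseCaseRiskMapping:
    """Hypothetical sketch: one AI use case mapped to the NZ legislation it engages."""
    use_case: str
    legislation: list[str] = field(default_factory=list)
    obligations: list[str] = field(default_factory=list)

# Illustrative entry only -- real mappings are built per organisation and per system.
chatbot = UseCaseRiskMapping(
    use_case="Customer service chatbot",
    legislation=["Privacy Act 2020", "Fair Trading Act 1986"],
    obligations=[
        "Transparency when collecting personal information (Privacy Act IPPs)",
        "No misleading or deceptive generated content (Fair Trading Act)",
    ],
)

print(chatbot.use_case, "->", ", ".join(chatbot.legislation))
```

The point of the sketch is that every use case carries its own list of concrete legal obligations, which is what lets controls be tied back to specific statutes rather than generic checklists.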
AI Risk Taxonomy for Aotearoa
We develop a risk classification system that includes the categories generic frameworks miss: Māori data sovereignty risks, cultural safety impacts for Māori and Pacific populations, Treaty of Waitangi obligation breaches, vendor concentration risks from offshore AI providers, and algorithmic bias affecting communities that are already underserved. Risk classification built for New Zealand's unique context.
Director Liability Assessment
We analyse your AI portfolio through the lens of Companies Act section 137 duties, identifying where directors are most exposed. The output is a clear liability map showing which AI systems carry the highest personal risk and what controls would demonstrate reasonable care and diligence, giving your board the evidence it needs.
Controls and Mitigations Design
For each identified risk, we design proportionate controls: Privacy Act 2020 compliance checkpoints, Fair Trading Act content review processes, cultural safety evaluation workflows, vendor dependency thresholds, and model performance boundaries. Controls are practical enough to actually implement, not theoretical exercises that gather dust. These integrate with your existing risk processes.
Governance Integration and Reporting
We embed AI risk into your existing governance structures rather than layering on another committee. Board reporting templates translate AI risk into language directors can act on, with clear escalation triggers and decision rights tied to your organisational risk appetite. The result is governance that enables your teams, not a bureaucratic barrier.
What We Deliver
Every deliverable is built for Aotearoa New Zealand's legal and cultural context. Nothing is borrowed from another jurisdiction and relabelled. Practical tools your team can use for ongoing oversight.
NZ AI Risk Taxonomy
A classification system covering Privacy Act 2020 violations, Fair Trading Act breaches, Treaty of Waitangi obligation failures, Māori data sovereignty risks, algorithmic bias against Māori and Pacific communities, and vendor concentration exposure. Delivered as a structured document and Excel file for GRC integration across your organisation.
Privacy Act Risk Mapping
Detailed mapping of each AI system against the 13 Information Privacy Principles. Identifies where automated processing creates compliance exposure, what notifications are required under breach reporting obligations, and where cross-border data flows raise sovereignty concerns for New Zealand businesses.
Treaty Impact Assessment
Evaluation of how your AI systems affect Māori communities and Te Tiriti o Waitangi obligations. Covers Māori data sovereignty, cultural safety, equitable outcomes, and partnership principles. Designed by our consultants for organisations in government, health, education, and financial services across Aotearoa.
Director Liability Analysis
A board-ready analysis of personal liability exposure under Companies Act 1993 sections 131-138. Maps each AI system to director duties, identifies highest-risk scenarios, and provides the evidence trail directors need to demonstrate reasonable care. Essential compliance documentation for organisations using AI in New Zealand.
Cultural Safety Risk Evaluation
Assessment of how your AI systems affect Māori and Pacific populations specifically. Evaluates algorithmic bias, representational harm, language and cultural assumptions in training data, and equitable access to AI-driven services. A risk dimension unique to Aotearoa New Zealand.
Risk Register and Controls Library
Pre-populated AI risk register with NZ-specific risks, control mappings, and assessment fields. Includes 50+ controls covering Fair Trading Act compliance checks, Privacy Act 2020 safeguards, cultural safety reviews, vendor dependency management, and algorithmic accountability measures aligned to the Algorithm Charter. Practical solutions your team can implement immediately.
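As a rough illustration of the register-plus-controls structure described above, a single entry could be modelled as follows. The field names, scoring convention, and example values are assumptions made for this sketch, not the delivered schema:

```python
from dataclasses import dataclass

@dataclass
class Control:
    control_id: str
    description: str
    legal_basis: str  # the statute, charter, or duty the control evidences

@dataclass
class RiskEntry:
    risk_id: str
    category: str    # taxonomy category, e.g. "Vendor concentration"
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (minor) .. 5 (severe)
    controls: list[Control]

    @property
    def inherent_score(self) -> int:
        # Simple likelihood x impact scoring, a common risk-register convention.
        return self.likelihood * self.impact

entry = RiskEntry(
    risk_id="R-001",
    category="Vendor concentration",
    likelihood=3,
    impact=4,
    controls=[
        Control("C-017", "Documented exit plan for primary offshore AI vendor",
                "Companies Act 1993 s 137 (reasonable care)"),
    ],
)

print(entry.risk_id, entry.inherent_score)  # R-001 12
```

Mapping each control to a legal basis is what turns a register from a spreadsheet into an evidence trail: a director can point from a specific risk to a specific control to a specific statutory duty.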
Who Needs an AI Risk Framework
If your organisation is deploying AI in New Zealand and nobody has asked "what could go wrong, and who is liable?", this is where you start. We work with organisations across every sector to build risk controls that protect against liability while enabling progress.
Financial Services Under FMA and RBNZ
Banks, insurers, and fund managers who need to demonstrate they are managing AI risks within their existing regulatory obligations before the Financial Markets Authority or Reserve Bank of New Zealand asks. We build frameworks aligned to CoFI Act fair conduct requirements and operational resilience expectations.
Government and Public Sector
Agencies subject to the Public Service AI Framework that need to operationalise risk controls for AI systems affecting New Zealanders, with particular attention to Treaty of Waitangi obligations and algorithmic accountability under the Algorithm Charter.
Organisations Serving Māori and Pacific Communities
Health, education, and social service providers whose AI systems must account for cultural safety, Māori data sovereignty, and equitable outcomes for underserved populations. We address Māori data governance and Te Tiriti compliance requirements.
Directors and Board Members
Directors who want documented evidence that AI risks are being managed, because under Companies Act 1993, "we didn't know" is not a viable defence. We provide the oversight structure and documentation boards need.
Common Questions About AI Risk Frameworks in New Zealand
What AI risks are unique to New Zealand?
Three categories stand out. First, Treaty of Waitangi obligations create risk dimensions around Māori data sovereignty, cultural safety, and equitable outcomes that exist nowhere else. Te Tiriti demands controls built specifically for Aotearoa. Second, New Zealand's small market means heavy reliance on offshore AI vendors, creating concentration risks if a single provider fails or changes terms. Third, the absence of AI-specific regulation means your organisation bears full responsibility for defining "reasonable" risk management. There is no prescribed standard to fall back on. We address all three dimensions in every framework we build.
How do Treaty of Waitangi obligations affect AI risk?
Te Tiriti principles of partnership, participation, and protection apply to how AI systems collect, process, and make decisions about Māori. Risks include training data that underrepresents or misrepresents Māori, algorithms that produce inequitable outcomes for Māori and Pacific populations, and Māori data sovereignty issues when Māori data is processed offshore without appropriate governance. Our framework includes specific risk categories, controls, and compliance documentation for Treaty obligations that no generic framework addresses.
What does the FMA expect regarding AI risk management?
The Financial Markets Authority expects regulated entities to manage AI risks under their existing obligations, including conduct licensing, fair dealing, and client care duties under the CoFI Act 2022. While there is no AI-specific rule, the FMA has signalled that entities using AI for financial advice, credit decisions, or customer interactions should demonstrate governance proportionate to the risk. The RBNZ holds the same expectation for operational resilience. We build risk frameworks that align to these expectations and position your organisation ahead of any future formalisation.
Can directors be personally liable for AI failures?
Yes. Under sections 131-138 of the Companies Act 1993, directors must act in good faith, in the best interests of the company, and with reasonable care, diligence, and skill. If an AI system causes significant harm, such as privacy breaches under the Privacy Act 2020, discriminatory outcomes, or misleading content under the Fair Trading Act, and the board cannot demonstrate it took reasonable steps to understand and manage those risks, individual directors face personal liability. A documented AI risk framework is the clearest evidence of due diligence. We build these frameworks for organisations across New Zealand.
How does this integrate with OECD AI Principles and ISO 42001?
New Zealand's National AI Strategy is built on the OECD AI Principles, and our risk frameworks align to these international standards while addressing NZ-specific requirements. For organisations pursuing ISO 42001 certification, our risk assessment methodology maps directly to Annex B control requirements. This means your risk framework serves double duty: addressing local compliance needs while building toward international recognition.
How long does this take and what does the engagement look like?
Most engagements run 8-14 weeks: regulatory mapping and discovery (2-3 weeks), taxonomy and assessment design (3-5 weeks), controls development and Treaty impact assessment (2-3 weeks), and integration and delivery (1-3 weeks). The timeline depends on the number of AI systems in scope and the regulatory complexity of your sector. We provide a fixed-price proposal after an initial scoping conversation.
Related Services
AI Governance Consulting
Full governance programme design covering operating models, committee structures, and decision-making frameworks for New Zealand organisations. We help organisations build sustainable oversight.
AI Audit and Assessment
Independent evaluation of your current AI governance maturity, identifying gaps in compliance and risk management before they become incidents.
ISO 42001 Certification
Achieve the international AI management system standard, providing structured proof of responsible AI governance. We guide organisations through certification in Auckland, Wellington, and Christchurch.
Build Your AI Risk Framework for New Zealand
In a market with no prescribed AI risk framework, the organisations that build their own set the standard. Talk to us about developing a risk framework that reflects NZ law, Treaty of Waitangi obligations, and the reality of how your organisation actually uses AI.