Context-Specific Assessment

Artificial Intelligence Impact Assessment for New Zealand Organisations

New Zealand has no mandated AI impact assessment regime. But the Privacy Act 2020, the Companies Act 1993, and the Code of Health and Disability Services Consumers' Rights all create liability for AI-driven decisions. The gap between "no specific AI law" and "no legal exposure" is where organisations get into trouble.

We conduct AI impact assessments designed for the New Zealand context, including cultural safety for Māori and Pacific populations, Treaty of Waitangi compliance, and population-appropriate methodologies that international frameworks consistently miss.


International Frameworks Were Not Built for Aotearoa

Most artificial intelligence risk frameworks originate from the EU, UK, or North America. They assume different population structures, different legal traditions, and different cultural obligations. Applying them directly to New Zealand creates blind spots that put organisations and communities at risk.

What off-the-shelf AI assessments miss in New Zealand:

  • Treaty of Waitangi obligations: AI impacts on Māori communities require specific assessment that no international framework addresses
  • Cultural safety for Māori and Pacific populations: bias testing calibrated for US or European demographics produces misleading results for Aotearoa's population
  • Health Information Privacy Code 2020: healthcare AI needs assessment against NZ-specific information privacy rules, not generic international analogies
  • Companies Act 1993 director liability: directors face personal exposure for AI harms under NZ law, a risk profile that differs from other jurisdictions
  • Māori data sovereignty: AI systems that process data about Māori must address kaitiakitanga obligations that have no equivalent in international standards

Waitemata Healthcare discovered this when international AI frameworks proved inappropriate for their clinical context. They are not unique. Any organisation serving New Zealand's population mix needs assessment methodologies built for that population, not adapted from overseas templates.

No Mandate, Real Liability

New Zealand does not mandate AI impact assessments. But the Privacy Act 2020, Consumer Guarantees Act 1993, Fair Trading Act 1986, and Companies Act 1993 all apply to AI-driven decisions. Directors cannot claim ignorance of AI harms as a defence. The FMA and RBNZ expect regulated entities to manage these risks under existing obligations. The absence of specific regulation does not mean the absence of legal consequences.

Cultural Safety Gaps

AI systems trained on international datasets often produce discriminatory outcomes for Māori and Pacific populations. Healthcare triage algorithms, credit scoring models, and recruitment tools all carry this risk. Standard fairness metrics do not account for the specific equity obligations New Zealand organisations hold under the Treaty of Waitangi and the OECD AI Principles' commitment to inclusive growth.

Unanswered Vendor Questions

What happens to personal data if your AI vendor is acquired or goes bankrupt? Who is accountable when a system fails? Are conflicts of interest between your AI provider and your organisation documented? Who monitors performance after deployment? The RBNZ has flagged vendor concentration risk as a concern for financial stability. Our assessments answer these questions.

AI Impact Assessment Methodology Built for the New Zealand Context

Our approach starts from the premise that AI operating in Aotearoa must be assessed against Aotearoa's legal landscape, population needs, and cultural obligations. We do not adapt international templates. Our NZ methodology was built from the ground up, aligned with the OECD AI Principles and Te Tiriti o Waitangi.

Cultural Impact Assessment

We evaluate how AI systems affect Māori and Pacific communities specifically. This includes testing for disparate outcomes across NZ ethnic groups, assessing whether Māori data sovereignty principles are upheld, and reviewing whether AI decision-making respects the partnership, protection, and participation principles of Te Tiriti.

NZ Legal Exposure Mapping

Our consultants map every AI system against the Privacy Act 2020, the Health Information Privacy Code 2020, the Code of Health and Disability Services Consumers' Rights, the Consumer Guarantees Act 1993, the Fair Trading Act 1986, and Companies Act 1993 director duties. For financial services organisations, we include FMA conduct expectations and RBNZ prudential risk requirements. You receive a clear picture of where liability sits and who holds it.

Population-Appropriate Methods

Fairness testing must reflect the population the AI system serves. We use NZ-specific demographic benchmarks, test against NZ ethnic categories, and assess outcomes for population groups that international tools ignore entirely. An assessment calibrated for London or San Francisco is not calibrated for Auckland, Wellington, or Christchurch.
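As an illustration of what population-appropriate fairness testing can look like in practice, here is a minimal sketch of a disparate impact check. The group labels, the choice of reference group, and the 0.8 review threshold (the US "four-fifths rule") are illustrative assumptions only, not our methodology or an NZ legal standard; a real assessment would use benchmarks calibrated to the population the system serves.

```python
from collections import defaultdict

# Illustrative threshold only (the US "four-fifths rule"); an actual NZ
# assessment would set review thresholds appropriate to the context.
DISPARITY_THRESHOLD = 0.8

def selection_rates(outcomes):
    """outcomes: list of (ethnic_group, approved: bool) pairs.

    Returns each group's approval rate.
    """
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(outcomes, reference="European"):
    """Ratio of each group's approval rate to the reference group's rate."""
    rates = selection_rates(outcomes)
    ref_rate = rates[reference]
    return {g: rate / ref_rate for g, rate in rates.items()}

def flag_groups(outcomes, reference="European"):
    """Groups whose ratio falls below the review threshold."""
    return [g for g, ratio in disparate_impact(outcomes, reference).items()
            if ratio < DISPARITY_THRESHOLD]
```

The point of the sketch is the structure, not the numbers: a system can look acceptable against broad international categories yet still flag when tested against the specific groups it serves in Aotearoa.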

What Our AI Impact Assessment Covers

Comprehensive assessment across legal, cultural, and operational dimensions, all calibrated for New Zealand's regulatory environment and the organisations operating within it.

Treaty of Waitangi Impact Review

We assess how AI systems affect Māori communities, whether data governance upholds Māori data sovereignty, and whether decision-making processes align with Treaty principles. This includes evaluating consultation mechanisms, representation in training data, and equitable outcomes across iwi and hapū. For Crown agencies and public sector organisations, this assessment is essential for meeting Te Tiriti obligations.

Privacy Act 2020 and HIPC Compliance

AI systems that process personal information must comply with the Privacy Act's 13 Information Privacy Principles. Healthcare AI faces additional obligations under the Health Information Privacy Code 2020. We test whether your systems meet both, including cross-border data transfer restrictions that affect cloud-based services used by organisations across New Zealand.

Cultural Safety Analysis

We test AI outputs for disparate impact on Māori and Pacific populations using NZ-appropriate benchmarks. Healthcare AI receives particular scrutiny against the Code of Health and Disability Services Consumers' Rights, which guarantees freedom from discrimination and the right to services that account for cultural needs.

Director Liability Assessment

Under the Companies Act 1993, directors must exercise reasonable care, diligence, and skill. If an AI system causes harm and the board had no visibility over its risks, directors face personal liability. We identify which risks create director exposure and recommend governance structures to manage that exposure, essential for organisations operating under the NZX Corporate Governance Code.

Vendor and Supply Chain Risk

What happens to your data if your AI vendor is sold? Where does your intellectual property sit? Are there conflicts of interest between your provider's commercial goals and your organisation's obligations? The RBNZ has identified vendor concentration risk as a significant concern. We assess third-party AI arrangements for contractual gaps, data protection risks under the Privacy Act 2020, and ongoing monitoring responsibilities.

Ongoing Accountability Structure

An impact assessment is a point-in-time exercise. AI systems change. We evaluate whether your organisation has the monitoring, escalation, and review mechanisms to maintain accountability after our assessment is complete. Who is responsible when performance degrades? Our report answers that question and establishes the framework for ongoing oversight.

Our Assessment Process

We follow a structured methodology that combines technical rigour with cultural competency. Every assessment is led by people who understand both the AI landscape and the specific regulatory and cultural context of New Zealand.

AI Impact Assessment Methodology

1. Scoping and Context Mapping (1-2 weeks)

We identify every AI system in scope, map the populations they affect, and document the legal and cultural obligations that apply. This includes stakeholder interviews across leadership, operations, and, where relevant, iwi or community liaison. We define which NZ-specific standards apply to each system, from the Privacy Act 2020 through to Māori data sovereignty requirements.


2. Technical and Cultural Assessment (2-6 weeks)

Parallel workstreams assess technical performance and cultural safety. Our team conducts bias testing using NZ demographic benchmarks, evaluates Privacy Act 2020 and HIPC compliance, reviews Treaty of Waitangi impact, and tests third-party vendor arrangements for contractual and data protection gaps. For financial services organisations, we assess alignment with FMA conduct expectations and RBNZ operational resilience requirements.


3. Findings and Liability Mapping (1-2 weeks)

Our specialists consolidate findings into a structured report that maps each risk to its legal basis, whether a Privacy Act provision, a Companies Act duty, a Treaty principle, or a consumer rights obligation. Risks are rated by severity and likelihood. Recommendations include specific remediation steps with accountability owners. We align findings with the OECD AI Principles to provide internationally recognised benchmarks.


4. Governance Implementation Support (1-2 weeks)

We present findings to boards and leadership teams, establish ongoing monitoring frameworks, and transfer the knowledge your organisation needs to maintain AI accountability independently. This includes templates for ongoing cultural safety review and Treaty compliance monitoring.

Sector-Specific Impact Assessment for NZ Industries

Different sectors face different AI risks. We tailor every assessment to the regulatory obligations, population impacts, and operational realities of your industry.

Financial Services

Banks, insurers, and KiwiSaver providers using AI for credit decisions, claims processing, or customer advice face scrutiny from both the FMA and RBNZ. Our assessments evaluate algorithmic fairness in lending and insurance pricing, compliance with the Financial Markets (Conduct of Institutions) Amendment Act 2022 (CoFI), and the vendor concentration risks that regulators have identified.

Healthcare

Healthcare organisations including Te Whatu Ora face unique assessment requirements. We evaluate AI systems against the Health Information Privacy Code 2020, the Code of Health and Disability Services Consumers' Rights, and the cultural safety obligations of serving Māori and Pacific patients. Drawing on the Waitemata Healthcare governance model, we deliver context-appropriate assessment for clinical AI that international frameworks cannot provide.

Government and Public Sector

Crown agencies and local government bodies such as Auckland Council and Wellington City Council must align with the Public Service AI Framework and the Algorithm Charter. We assess AI systems against these frameworks alongside Te Tiriti obligations, Government Procurement Rules compliance, and the transparency commitments that the New Zealand public expects.

Technology and Mid-Market

SaaS companies and technology businesses in Auckland's growing tech hub face assessment needs driven by customer requirements, international compliance (including the EU AI Act for those selling into Europe), and ISO 42001 certification pathways supported by Callaghan Innovation.

What You Receive

Cultural Safety Assessment

  • Disparate impact analysis across New Zealand ethnic populations
  • Māori and Pacific population outcome testing aligned with Māori data sovereignty principles
  • Recommendations for culturally appropriate AI governance

Treaty Impact Review

  • Assessment against partnership, protection, and participation principles of Te Tiriti o Waitangi
  • Māori data governance compliance evaluation using Te Mana Raraunga principles
  • Remediation plan for identified Treaty compliance gaps

Legal Liability and Compliance Report

  • Privacy Act 2020 and HIPC compliance analysis per AI system
  • Companies Act 1993 director duty risk assessment with mitigation strategies
  • Vendor contract gap analysis and data protection review for offshore AI providers

Ongoing Monitoring Framework

  • AI performance monitoring templates and escalation paths
  • Cultural safety review cycle documentation aligned with Treaty obligations
  • Accountability assignment for each AI system and risk

Frequently Asked Questions

If there is no mandatory AI assessment in New Zealand, why should we do one?

Because the liability exists even without a specific AI law. The Privacy Act 2020 applies to AI systems that process personal information. The Companies Act 1993 creates director duties that extend to AI governance. The Code of Health and Disability Services Consumers' Rights applies to AI in healthcare. The FMA and RBNZ expect regulated entities to manage technology risks under existing obligations. Conducting an assessment proactively is significantly cheaper than responding to a Privacy Commissioner investigation or defending a director liability claim.

How is a cultural safety assessment different from standard bias testing?

Standard bias testing typically checks for disparate outcomes across broad demographic categories using international benchmarks. Cultural safety assessment goes further: it tests against NZ-specific population groups, evaluates outcomes against Treaty of Waitangi principles, assesses whether Māori data sovereignty is respected, and examines whether AI decision-making accounts for the cultural context of the communities it affects. The distinction matters because an AI system can pass generic bias testing while still producing inequitable outcomes for Māori and Pacific populations in Aotearoa.

What if our AI vendor is based overseas?

This creates additional assessment considerations. The Privacy Act 2020 restricts cross-border data transfers. We evaluate vendor contracts for data protection adequacy, assess what happens to your data if the vendor is acquired or ceases operations, identify conflicts of interest, and determine whether your organisation retains meaningful control over AI systems hosted offshore. The RBNZ has specifically identified reliance on a small number of providers as a risk to financial stability.

Do directors really face personal liability for AI harms?

Under the Companies Act 1993, directors must act with reasonable care, diligence, and skill. If a board has no visibility over the AI systems operating in the organisation (no risk register, no impact assessment, no monitoring framework) and those systems cause harm, the lack of governance itself becomes the liability exposure. Our assessment establishes the governance baseline New Zealand directors need. As AI governance becomes mainstream practice, the standard of what constitutes "reasonable" oversight will only increase.

How long does a NZ-context assessment take?

Typically 4 to 10 weeks depending on scope. A focused assessment of a single high-risk AI system (such as a healthcare triage tool used by Te Whatu Ora) takes 4 to 6 weeks. A comprehensive assessment across an organisation's full AI portfolio, including cultural safety analysis and vendor reviews, takes 8 to 10 weeks. We scope based on your risk profile, the populations your systems serve, and which legal obligations apply.

What ongoing responsibilities remain after the assessment?

AI systems are not static. Models drift, vendor arrangements change, and populations evolve. Our assessment includes an ongoing monitoring framework that defines who is responsible for continued oversight, what triggers a reassessment, and how cultural safety is maintained over time. We build the capability for your organisation to sustain accountability without permanent reliance on external assessors.
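To illustrate the kind of reassessment trigger a monitoring framework might define, here is a minimal sketch using the Population Stability Index (PSI), a common measure of score distribution drift. The thresholds (0.1 to watch, 0.2 to reassess) are conventional industry heuristics assumed for illustration, not values drawn from any NZ regulatory requirement.

```python
import math

def psi(baseline, current, eps=1e-6):
    """Population Stability Index between two score distributions.

    baseline/current: proportions per score bin (each list sums to 1).
    """
    total = 0.0
    for b, c in zip(baseline, current):
        b, c = max(b, eps), max(c, eps)  # guard against empty bins
        total += (c - b) * math.log(c / b)
    return total

def drift_action(baseline, current):
    """Map drift to an escalation step (illustrative thresholds only)."""
    value = psi(baseline, current)
    if value >= 0.2:
        return "trigger reassessment"
    if value >= 0.1:
        return "monitor closely"
    return "stable"
```

In a real framework the measure, the thresholds, and the named owner of each escalation step would all be set during the assessment, and drift would be tracked per population group, not just in aggregate.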

Request an AI Impact Assessment for Your Organisation

An independent assessment built for the New Zealand context gives you clarity on legal exposure, cultural safety, and governance gaps before they become complaints, investigations, or front-page stories.

No-obligation initial consultation | Fixed-price engagements | NZ-context methodology