Governance Foundation

AI Policy Development for New Zealand Organisations

New Zealand has no artificial intelligence-specific legislation. Your internal policies are the primary governance mechanism for AI use and the standard against which regulators and courts will judge your organisation. The Privacy Act 2020, Fair Trading Act 1986, and Companies Act 1993 all apply, but none prescribe AI-specific controls. We build policy suites that fill that gap, grounded in Te Tiriti o Waitangi obligations, OECD AI Principles, and sector-specific expectations from the FMA and RBNZ.

81% of New Zealand leaders are aware of AI risks, yet only 6% are confident in their governance readiness. We help organisations close that gap with practical, enforceable policies that translate compliance obligations into controls your team can follow.

See the Full Policy Suite
AI Policy Management Dashboard
No AI-Specific Law in NZ
Your internal policies are your first line of defence

Why Artificial Intelligence Policy Development Cannot Wait for NZ Legislation

New Zealand's voluntary, principles-based approach places the burden of defining acceptable AI use on each organisation. The Algorithm Charter is opt-in. The Public Service AI Framework applies to agencies, not the private sector. Without internal policies that translate these frameworks into operational controls, there is no governance.

The Voluntary Gap

The National AI Strategy (July 2025) and Public Service AI Framework both rely on voluntary adoption rather than prescriptive regulation. Without mandated guardrails, organisations that lack internal policies have no documented standards for how AI should be used, procured, or governed. Staff use generative AI tools without knowing the boundaries. Managers approve deployments without understanding their Privacy Act 2020 obligations. Directors face personal liability under the Companies Act 1993 for failures they may not even know exist. Well-crafted policies are the foundation on which all other controls are built.

Privacy Act 2020 Exposure

The Privacy Act 2020's 13 Information Privacy Principles already apply to every AI system that processes personal information, and most organisations have not mapped these obligations to their tools. Principle 1 constrains the purposes for which data can be collected and used, including for model training. Principle 6 gives individuals the right to access the information used in algorithmic decisions about them. Principle 8 requires that information be checked for accuracy before it is relied on. Principle 12 restricts disclosures of personal information to offshore platforms. The Privacy Commissioner has signalled increasing scrutiny of automated processing. Without policies that map these principles to your AI systems, you have no documented compliance defence.

Te Tiriti Obligations Unaddressed

Crown agencies and organisations serving Māori communities face Treaty of Waitangi obligations that extend to AI in ways generic policies never address. Data kaitiakitanga, tino rangatiratanga over information, equitable algorithmic outcomes, and meaningful partnership in system design reflect the constitutional fabric of Aotearoa. The Public Service AI Framework explicitly requires Crown agencies to consider Treaty obligations in AI deployment. Private sector organisations processing data about Māori communities face growing expectations from iwi, regulators, and the Privacy Commissioner. Effective governance must embed Māori data governance from the outset as a foundational commitment, not an afterthought.

"The Privacy Act does not distinguish between decisions made by humans and decisions made or assisted by automated systems. The same privacy principles apply regardless of the technology used."

- Office of the Privacy Commissioner, New Zealand

Artificial Intelligence Policy Development Suite Built for the NZ Context

Eight interconnected policies addressing the compliance obligations, cultural expectations, and regulatory realities facing New Zealand organisations, from Privacy Act 2020 mapping to Treaty of Waitangi integration, from FMA and RBNZ sector requirements to OECD AI Principles alignment.

1

AI Acceptable Use Policy

Whole-of-organisation boundaries

Sets clear parameters for every employee on which AI tools are sanctioned, what data must never be entered, and when human review is mandatory. Covers data classification rules aligned to the 13 Information Privacy Principles, prohibited inputs including client data and iwi-sensitive information, and human-in-the-loop requirements for decisions affecting individuals.

Sanctioned Tools Register · Data Classification Rules · Human-in-the-Loop Requirements
2

Te Tiriti & Ethical AI Policy

Treaty-grounded principles

Operationalises OECD AI Principles and Te Tiriti o Waitangi obligations into enforceable internal standards. Addresses data kaitiakitanga and the principle that Māori communities retain rangatiratanga over their data regardless of who holds it. Covers equitable algorithmic outcomes through bias testing against NZ demographic data, whānau-centred impact assessment for AI systems affecting Māori communities, iwi consultation protocols, and alignment with the Public Service AI Framework's Treaty requirements for Crown agencies.

Data Kaitiakitanga · Equity Assessment · Partnership Obligations
3

AI Procurement & Vendor Policy

Offshore vendor risk for NZ

New Zealand organisations rely heavily on offshore AI platforms with data processed outside NZ jurisdiction. This policy covers cross-border data transfer assessments under Privacy Act 2020 Principle 12, vendor due diligence criteria tailored to AI-specific risks including model training on customer data and sub-processor chains, data residency requirements reflecting FMA and RBNZ obligations, and contractual protections that account for the negotiating reality of a small-market buyer dealing with global platforms.

Cross-Border Transfers · Vendor Due Diligence · Data Residency
4

AI Development & Deployment Policy

For organisations building or customising AI

Development standards covering model documentation, bias testing against NZ demographic data including outcomes for Māori and Pacific populations, deployment gates tied to risk classification consistent with the OECD AI Principles, and ongoing monitoring obligations. Aligned to the Public Service AI Framework's tiered risk approach and ISO 42001 requirements for organisations pursuing certification.

NZ Bias Testing · Risk-Tiered Gates · Model Documentation
5

AI Data Governance Policy

13 Privacy Principles mapped

Maps each of the Privacy Act 2020's 13 Information Privacy Principles to practical AI data handling requirements. Covers training data provenance and purpose limitation (Principles 1-4), access and correction rights for algorithmic decisions (Principles 6-7), accuracy obligations (Principle 8), retention limits (Principle 9), and cross-border transfer restrictions (Principle 12). Includes Māori data governance protocols for data sets that engage Treaty of Waitangi obligations.

Privacy Principle Mapping · Māori Data Governance · Training Data Provenance
6

AI Incident Response Policy

Breach notification and escalation

Incident classification tailored to NZ regulatory reporting requirements, because AI failures trigger compliance obligations that generic incident response plans do not address. Covers mandatory Privacy Commissioner notification under the Privacy Act 2020's notifiable privacy breach regime, FMA and RBNZ notification procedures for financial sector failures, Treaty of Waitangi impact assessment for incidents affecting Māori communities, and post-incident improvement cycles that prevent recurrence.

Privacy Commissioner Notification · FMA/RBNZ Reporting · Post-Incident Review
7

Generative AI Usage Policy

ChatGPT, Copilot, Claude guardrails

Practical governance for the AI tools NZ employees are already using, because generative AI adoption has outpaced policy development in most organisations. Covers approved platforms and licence terms, prohibited inputs including client data and iwi-sensitive information, output accuracy verification to mitigate hallucination risks, intellectual property considerations under the Copyright Act 1994, and Fair Trading Act 1986 obligations for consumer-facing AI-generated content.

Platform Approvals · Prohibited Inputs · IP Considerations
8

AI Training & Capability Policy

Closing the awareness-to-action gap

Structured capability programme addressing the awareness-confidence gap across NZ organisations. Tiered training by role: foundational AI literacy for all staff covering Privacy Act 2020 basics, practitioner skills for active users including sector-specific compliance from the FMA and RBNZ, technical standards for developers aligned to OECD AI Principles and ISO 42001, and governance literacy for boards covering Companies Act 1993 director duties and Treaty of Waitangi obligations.

Tiered Programme · Board Governance Literacy · Competency Assessment

Implementation Approach: Built for Adoption

The challenge in New Zealand is not writing policies. It is writing policies that people follow when there is no mandatory AI compliance framework to fall back on. We centre every engagement on making governance the path of least resistance, translating Privacy Act 2020 obligations, Treaty of Waitangi responsibilities, and sector-specific expectations into practical controls that integrate with how your organisation actually works.

AI Policy Implementation Tracker

NZ Regulatory and Standards Alignment

  • Privacy Act 2020 - all 13 Information Privacy Principles
  • Te Tiriti o Waitangi and data kaitiakitanga principles
  • Public Service AI Framework (February 2025)
  • Fair Trading Act 1986 and Consumer Guarantees Act 1993
  • OECD AI Principles and ISO/IEC 42001:2023
1

Landscape Mapping

We audit your existing AI footprint, including shadow AI, and map applicable compliance obligations under the Privacy Act 2020, sector-specific regulators (FMA, RBNZ), the Fair Trading Act 1986, Companies Act 1993 director duties, and any Treaty of Waitangi requirements. We then assess your current policy posture against the OECD AI Principles and identify gaps.

2

Collaborative Drafting

We work alongside your legal, privacy, IT, and HR teams to draft policies that reflect how your organisation actually operates. For Crown agencies, this includes Treaty of Waitangi-aligned language and Public Service AI Framework alignment. For regulated entities, we embed FMA and RBNZ conduct expectations. Policies that people cannot follow create governance theatre, not genuine oversight.

3

Stakeholder Review

Structured review cycles with governance committees, board risk subcommittees, and where appropriate, iwi or community stakeholders. We facilitate sign-off rather than leaving your team to coordinate approvals across multiple parties.

4

Adoption and Embedding

We produce communication kits, manager talking points, and staff quick-reference guides in plain language. Policies are embedded into existing workflows rather than sitting in a SharePoint folder.

5

Review Cycle Design

We establish a structured review cadence with triggers linked to regulatory changes, Privacy Commissioner guidance updates, and shifts in the NZ AI landscape. Policies are living documents that evolve as the environment matures.

What You Receive

A complete governance package, not a set of templates. Every deliverable is tailored to your organisation's compliance obligations.

Customised Policy Suite

Six to eight AI governance policies tailored to your sector, size, and regulatory obligations. Delivered in editable format with version control protocols so your team can maintain and update them independently.

Privacy Principle Mapping

A detailed matrix mapping each of the 13 Information Privacy Principles to your specific AI systems and data flows. This becomes your reference document for Privacy Commissioner engagement.
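In practice, a matrix like this can also live in a simple machine-readable register maintained alongside the policy documents, so gaps are visible at a glance. The sketch below is illustrative only: the system names, principle entries, and controls are placeholder assumptions, not a prescribed format.

```python
# Minimal sketch of an IPP-to-AI-system mapping register.
# All system names, principle entries, and controls below are
# illustrative placeholders, not a prescribed structure.

PRINCIPLE_MAP = {
    "customer-chatbot": {
        "IPP1": "Collection limited to support queries; transcripts not used for training",
        "IPP6": "Documented access-request workflow for chat histories",
        "IPP12": "Offshore hosting assessed for comparable safeguards",
    },
    "credit-scoring-model": {
        "IPP6": "Plain-language explanation of decisions available on request",
        "IPP8": "Quarterly accuracy review before outputs are relied on",
    },
}

def unmapped_principles(system: str, required: set[str]) -> set[str]:
    """Return required principles with no documented control for a system."""
    return required - set(PRINCIPLE_MAP.get(system, {}))
```

A gap check such as `unmapped_principles("customer-chatbot", {"IPP1", "IPP6", "IPP8", "IPP12"})` would surface IPP8 as the principle with no documented control for that system, flagging it for the next review.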

Staff Communication Kit

Plain-language summaries, one-page quick-reference cards, and manager briefing packs designed for NZ workplace culture. Policies are only effective when people understand them.

Te Tiriti Compliance Guide

For Crown agencies and public sector organisations: a standalone guide mapping Treaty of Waitangi obligations to AI governance decisions, including consultation protocols and data kaitiakitanga implementation guidance.

Governance Committee Charter

Terms of reference for an AI governance committee or risk subcommittee, including membership, meeting cadence, decision authority, and reporting lines appropriate for NZ board structures.

Regulatory Horizon Scanner

A structured monitoring framework for tracking changes from the Privacy Commissioner, FMA, RBNZ, and the evolving National AI Strategy. Includes review triggers so policies are updated when the landscape shifts.
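One way to make such a framework operational is a small review-trigger register that maps each monitored source to the policies it affects and a review cadence. The sources, cadences, and policy names below are assumptions for illustration, not a definitive monitoring design.

```python
# Illustrative sketch of a review-trigger register for a regulatory
# horizon scanner. Cadences and source-to-policy mappings are assumed
# examples, not recommended values.
from datetime import date, timedelta

REVIEW_TRIGGERS = [
    {"source": "Privacy Commissioner guidance", "cadence_days": 90,
     "policies": ["AI Data Governance", "Generative AI Usage"]},
    {"source": "FMA/RBNZ publications", "cadence_days": 90,
     "policies": ["AI Procurement & Vendor", "AI Incident Response"]},
    {"source": "National AI Strategy updates", "cadence_days": 180,
     "policies": ["Te Tiriti & Ethical AI", "AI Training & Capability"]},
]

def due_reviews(last_checked: dict[str, date], today: date) -> list[str]:
    """Return sources whose review cadence has elapsed since the last check.

    A source never checked before (absent from last_checked) is always due.
    """
    return [t["source"] for t in REVIEW_TRIGGERS
            if today - last_checked.get(t["source"], date.min)
               >= timedelta(days=t["cadence_days"])]
```

Keeping the trigger list as data rather than prose means the review cadence can be audited, and a quarterly governance meeting can start from whatever `due_reviews` returns.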

Frequently Asked Questions

If there is no AI-specific law in NZ, why do we need AI policies?

Because existing laws already apply. The Privacy Act 2020 governs every AI system that processes personal information. The Fair Trading Act 1986 prohibits misleading conduct regardless of whether a human or an algorithm generates it. The Companies Act 1993 creates personal liability for directors who fail to exercise reasonable oversight. The Human Rights Act 1993 prohibits discriminatory outcomes, whether they arise from human or automated decisions. Without policies that map these obligations to your AI usage, you have no documented defence if something goes wrong. The absence of AI-specific law makes internal policies more important, not less.

How do you handle Te Tiriti obligations in commercial organisations?

Treaty of Waitangi obligations are most direct for Crown agencies, where the Public Service AI Framework explicitly requires Te Tiriti considerations in AI deployment. But commercial entities working with Māori communities or processing Māori data also benefit from Te Tiriti-aligned policies. We tailor the scope based on your organisation's relationship with Māori stakeholders and the nature of your AI use cases. For some, this means a comprehensive Māori data governance framework across every policy. For others, it means targeted provisions in data governance and acceptable use policies.

Our team is small. Do we really need eight separate policies?

Not necessarily. For smaller organisations with fewer AI systems, we consolidate the suite into fewer, broader documents that cover the same compliance ground. A mid-sized NZ organisation might start with three core policies (acceptable use, data governance, and incident response) and expand as maturity grows. We right-size the suite to your scale, sector, and compliance obligations.

How do the policies address offshore AI platforms like ChatGPT and Microsoft Copilot?

Most AI tools used by NZ organisations process data in overseas jurisdictions, creating compliance obligations under the Privacy Act 2020 that generic policies do not address. Our policies cover cross-border data transfers under Principle 12, contractual protections for data processed offshore, practical controls for staff using cloud-based tools where data may leave NZ, and due diligence procedures for evaluating vendor hosting and sub-processors. For organisations using ChatGPT, Microsoft Copilot, Claude, and similar platforms, these policies close a critical gap.

Start AI Policy Development Before the Regulator Demands It

In New Zealand's voluntary landscape, proactive policy development demonstrates governance maturity and positions your organisation ahead of whatever requirements emerge from the National AI Strategy, the Privacy Commissioner's evolving guidance, and FMA and RBNZ scrutiny.

Start with a Governance Assessment