Director Liability + AI Oversight

Board-Level AI Governance for New Zealand Directors

Sections 131 to 138 of the Companies Act 1993 do not mention artificial intelligence. They do not need to. Your duty of care, diligence, and skill applies to every system your organisation deploys, including the AI systems adopted without board sign-off. Section 131 demands good faith. Section 137 demands the care, diligence, and skill of a reasonable director. Section 135 prohibits reckless trading. These obligations are technology-agnostic, and so is director liability. When an AI system makes a flawed credit decision, produces a biased recruitment outcome, or breaches the Privacy Act 2020 through automated processing of personal information, the question will not be whether the board understood the technology. It will be whether directors took reasonable steps to govern it.

Seventy-six per cent of New Zealand leaders are prioritising AI agents, and one in four says governance is the missing link. Yet most boards across Aotearoa have no structured oversight of the AI systems their organisations already run. We close that gap with governance built for this jurisdiction, grounded in the Companies Act 1993, the Privacy Act 2020, FMA and RBNZ expectations, and Treaty of Waitangi obligations. These are not generic frameworks borrowed from offshore. They reflect how governance actually works in New Zealand.

See Our Approach

Why NZ Boards Cannot Afford to Wait on Artificial Intelligence Governance

New Zealand has no AI-specific legislation. That is not a comfort. It is a trap. Without prescriptive rules, your existing director duties under the Companies Act 1993 become the standard against which your AI oversight will be judged. The FMA has signalled that conduct obligations extend to AI-enabled services. The RBNZ expects prudential risk management to encompass algorithmic systems. The Privacy Commissioner is interpreting the Privacy Act 2020 for automated decision-making. When regulators and courts assess director conduct, they will look at what comparable boards were doing. Proactive governance is the only defence, and it needs to be built before the scrutiny arrives, not after.

Personal Liability Under the Companies Act

The Companies Act's core duties gain sharp relevance when organisations deploy AI without governance oversight: section 131 requires directors to act in good faith and in the best interests of the company, section 137 demands the care, diligence, and skill of a reasonable director, and section 135 prohibits reckless trading. When an AI system causes harm, whether biased lending decisions that trigger FMA scrutiny, privacy breaches requiring mandatory notification to the Privacy Commissioner under the Privacy Act 2020, or flawed automated advice that exposes the organisation to Fair Trading Act 1986 liability, regulators and courts will ask whether the board took reasonable steps to govern the system, not whether it understood the technology. Directors who cannot demonstrate informed oversight of their organisation's AI systems face personal exposure that no D&O insurance policy may fully cover. And the standard of "reasonable" rises as AI governance becomes mainstream across Aotearoa.

Voluntary Landscape Means Boards Set the Standard

New Zealand's AI governance environment is largely voluntary. The government's Algorithm Charter is opt-in. The Public Service AI Framework applies to agencies, not the private sector. The National AI Strategy, published in July 2025 (making New Zealand the last OECD country to formalise its AI position), takes a principles-based approach grounded in the OECD AI Principles rather than prescriptive regulation. There is no NZ equivalent of the EU AI Act. This means your board is not following a rulebook. It is writing one. The organisations that establish rigorous governance now will define what "reasonable" looks like when regulators and courts inevitably assess director conduct. In a voluntary landscape, proactive compliance is the strongest defence. Boards that wait for mandatory rules will find themselves building governance under pressure, public scrutiny, and regulatory attention.

Te Tiriti Obligations at Board Level

AI systems that process data about Māori communities, deliver services to Māori, or operate in sectors with Crown obligations raise questions that technology teams cannot answer alone: Māori data governance, tino rangatiratanga over information, meaningful partnership in system design, and equitable algorithmic outcomes. These are governance-level decisions that demand board attention, not delegation to project managers. The Public Service AI Framework explicitly requires Crown agencies to consider Treaty of Waitangi obligations in AI deployment. Private sector boards serving Māori communities face growing expectations from iwi, from the Office of the Privacy Commissioner, and from the broader Aotearoa community. Boards that delegate Treaty considerations to IT departments are exposing both the organisation and themselves to reputational, legal, and compliance risk. In Aotearoa, kaitiakitanga must be embedded at the highest level of governance as a board-level commitment, not an operational afterthought.

How Our Team Delivers Board-Level AI Governance for NZ

Four integrated workstreams designed for the New Zealand governance context. Not a framework borrowed from another jurisdiction, but built for the Companies Act 1993, the Privacy Act 2020, FMA and RBNZ expectations, and Te Tiriti o Waitangi. Every workstream reflects the realities of governing AI in Aotearoa's unique regulatory and cultural landscape.

Director Liability Briefings

Most directors understand their general duties under the Companies Act 1993. Few have considered how those duties apply when management deploys machine learning models, generative AI tools, or autonomous decision-making systems that process personal information under the Privacy Act 2020. We run structured briefings that translate sections 131 through 138 of the Companies Act into concrete AI governance expectations: what directors must ask, what documentation to require, where personal liability exposure sits, and how D&O insurance coverage applies to AI-related claims. We cover the intersection of director duties with FMA conduct expectations, RBNZ prudential requirements, and the Fair Trading Act 1986's implications for AI-generated consumer communications. These briefings form the foundation of effective risk management at board level, giving directors the confidence to exercise informed oversight of AI systems without needing to become technologists.

  • Companies Act duty mapping to AI risk categories
  • Personal liability scenarios and case analysis
  • D&O insurance gap assessment for AI-related claims
  • Director question frameworks for management reporting

NZ Regulatory Landscape Education

The FMA expects conduct obligations to extend to AI in financial services, scrutinising algorithmic advice, automated credit decisions, and AI-driven customer outcomes with the same rigour as human decisions. The RBNZ expects prudential risk oversight to encompass technology systems, with growing attention to model risk, operational resilience, and concentration risk from third-party AI providers. The Office of the Privacy Commissioner has signalled algorithmic decision-making as a priority under the Privacy Act 2020, with specific guidance on transparency, the information privacy principles, and the emerging interpretation of automated processing obligations. The National AI Strategy, grounded in the OECD AI Principles, is influencing how all regulators think about AI oversight. These compliance expectations exist today, even without AI-specific legislation. We bring boards up to speed on what each regulator expects, where their organisation sits, and what governance gaps create the greatest risk management exposure, translating regulatory signals into actionable governance strategies that satisfy the Companies Act 1993's standard of reasonable oversight.

  • FMA conduct expectations for AI-enabled services
  • RBNZ prudential risk expectations mapping
  • Privacy Act 2020 automated decision-making obligations
  • Algorithm Charter and Public Service AI Framework briefing

Treaty-Informed Governance Design

Te Tiriti o Waitangi creates obligations that cannot be addressed by a privacy impact assessment or a standard risk register. When AI systems collect, process, or make decisions using data relating to Māori, boards need governance structures that reflect partnership, protection, and participation at a constitutional level. The Public Service AI Framework makes this explicit for Crown agencies, but the obligations extend further. Any organisation serving Māori communities or processing data about Māori faces expectations that are grounded in the fabric of Aotearoa's governance. Our team helps boards integrate these Treaty of Waitangi obligations into their AI governance substantively, not as an afterthought or a compliance checkbox. We work with directors to understand data kaitiakitanga, tino rangatiratanga over information, and the practical governance mechanisms that demonstrate genuine partnership, from iwi consultation protocols to equitable outcome monitoring. Genuine Māori data governance requires board-level commitment, and we ensure your governance reflects that commitment with mana.

  • Māori data sovereignty assessment for AI systems
  • Board reporting on Treaty compliance in AI deployments
  • Consultation frameworks for AI affecting Māori communities
  • Alignment with Te Mana Raraunga principles

Governance Structure and Charter Development

Knowing the risks is only valuable if it changes how your board operates. We design practical governance structures (committee mandates, escalation thresholds, reporting cadences, and decision authorities) that give directors genuine oversight without requiring them to become technologists. For NZX-listed companies, we align these structures with the NZX Corporate Governance Code. For Crown entities, we integrate Public Service AI Framework requirements and Treaty of Waitangi obligations. For FMA and RBNZ-regulated businesses, we ensure governance structures satisfy the conduct and prudential expectations these regulators have signalled for AI systems. Every structure is tailored to your organisation's size, sector, and AI maturity level. Every structure maintains the rigorous oversight that regulators, stakeholders, and the Companies Act 1993 demand from directors.

  • Board AI governance charter with NZ-specific provisions
  • Committee mandate design or expansion recommendations
  • AI risk escalation and decision authority matrix
  • Board-ready AI reporting templates and dashboards

What Your Board Receives

Tangible outputs from our team that change how your board governs AI, not a slide deck that gathers dust after the strategy offsite. Every deliverable is designed to strengthen compliance and risk management across your organisation.

Board AI Governance Charter

A formal charter defining the board's AI oversight role, delegated authorities, and reporting requirements, drafted for your constitution and committee structure.

Director Liability Briefing Pack

A confidential reference document mapping your organisation's AI footprint against Companies Act duties, with specific liability scenarios and mitigation actions.

Treaty Compliance Board Report

Assessment of how your AI systems interact with Māori data and communities, with board-level recommendations aligned to Te Tiriti obligations and data sovereignty principles.

FMA/RBNZ Readiness Assessment

Gap analysis of your AI governance against current FMA conduct expectations and RBNZ prudential risk standards, with a prioritised remediation roadmap.

AI Risk Register for Directors

A board-level risk register categorising every AI system by risk tier, with oversight requirements and escalation triggers appropriate for director consumption.
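To make the tiering idea concrete, a register like this can be sketched as simple structured data. This is an illustration only, not our register template: the tier names, oversight rules, and example systems below are hypothetical, and a real register would reflect your organisation's actual AI footprint and escalation thresholds.

```python
from dataclasses import dataclass

# Hypothetical oversight rules per risk tier (illustrative, not prescriptive).
TIER_OVERSIGHT = {
    "high": "Board review each quarter; material incidents escalated within 24 hours",
    "medium": "Risk committee review twice a year; incidents reported at next meeting",
    "low": "Management attestation in the annual governance review",
}

@dataclass
class AISystemEntry:
    name: str
    purpose: str
    risk_tier: str       # "high" | "medium" | "low"
    personal_info: bool  # processes personal information (Privacy Act 2020 relevance)

    def oversight_requirement(self) -> str:
        # Each entry carries its tier's oversight rule and escalation trigger.
        return TIER_OVERSIGHT[self.risk_tier]

# Hypothetical example systems, one line per AI deployment.
register = [
    AISystemEntry("credit-scoring-model", "automated lending decisions", "high", True),
    AISystemEntry("hr-screening-tool", "recruitment shortlisting", "high", True),
    AISystemEntry("demand-forecaster", "inventory planning", "low", False),
]

# Board pack summary: the systems that demand director-level attention.
board_items = [entry.name for entry in register if entry.risk_tier == "high"]
```

The point of the structure, whether it lives in a spreadsheet or a governance platform, is that every AI system has exactly one tier, every tier has a defined oversight requirement, and the board-facing view filters to what directors actually need to see.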

Quarterly Regulatory Briefings

Ongoing updates on NZ regulatory developments, Privacy Commissioner guidance, and international AI governance trends relevant to your sector and obligations.

Board Question Framework

A structured set of questions directors should ask management about AI deployments, organised by risk category and designed to demonstrate informed oversight.

Annual Governance Review

Yearly assessment of your AI governance maturity against evolving NZ expectations, with recommendations for the coming year's governance programme.

Boards We Work With

Different organisations face different AI governance pressures. NZX-listed companies navigate the Corporate Governance Code alongside Companies Act 1993 duties. Crown entities balance Public Service AI Framework requirements with Treaty of Waitangi obligations. FMA and RBNZ-regulated businesses face sector-specific conduct and prudential expectations. We tailor every engagement to your regulatory exposure, organisational scale, and AI maturity, reflecting the realities of operating in Aotearoa.

NZX-Listed and Large Private Companies

For boards navigating the NZX Corporate Governance Code alongside AI adoption. Whether you are deploying AI in customer-facing products, internal operations, or strategic decision support, your directors need governance frameworks that satisfy both the Code's principles and your Companies Act 1993 duties, including section 131's good faith requirement and section 137's standard of care. We build governance structures that scale with your ambitions, ensure compliance with the Privacy Act 2020 for AI-driven data processing, and demonstrate the informed oversight that institutional investors and regulators increasingly expect from listed companies.

Crown Entities and Government Organisations

Public sector boards face additional layers that private sector directors do not encounter. The Public Service AI Framework sets explicit expectations for AI deployment. Cabinet expectations on algorithmic transparency demand documented decision-making processes. The Algorithm Charter creates voluntary but reputationally binding commitments. And Treaty of Waitangi obligations are non-negotiable rather than aspirational, requiring genuine partnership with Māori in AI system design, deployment, and oversight. We help Crown entity boards meet these requirements while enabling the AI-driven efficiency gains government agencies are expected to deliver under the National AI Strategy. Governance must balance practical delivery with the unique obligations of serving all New Zealanders, including Māori data governance and equitable outcome monitoring.

FMA and RBNZ Regulated Entities

Financial services boards face the most immediate regulatory pressure on AI governance in New Zealand. The FMA's conduct expectations increasingly encompass algorithmic decision-making, AI-assisted customer advice, and automated compliance processes, with a focus on fair customer outcomes that mirrors the scrutiny applied to human decisions. The RBNZ's prudential focus is expanding to encompass model risk from machine learning systems, operational resilience dependencies on AI infrastructure, and concentration risk from third-party AI providers. If your organisation uses AI for credit decisions, claims processing, fraud detection, or customer advice, board-level governance is not optional. It is what regulators will ask to see. We help financial services boards build governance that satisfies both the FMA and RBNZ while meeting Companies Act 1993 duties and Privacy Act 2020 obligations. Effective oversight in this sector requires boards that understand where AI creates value and where it creates exposure.

Organisations with Māori Community Impact

If your AI systems collect, analyse, or make decisions using data about Māori communities, your board has governance obligations that go beyond the Privacy Act 2020. Te Tiriti o Waitangi creates expectations of partnership, protection, and participation that cannot be satisfied by a privacy impact assessment alone. We work with boards to integrate Treaty principles into AI governance in a way that is substantive, not performative, ensuring data kaitiakitanga, rangatiratanga over information, and meaningful iwi consultation are reflected in how AI is overseen at the highest level. This is about building governance that honours the constitutional fabric of New Zealand. We bring both the governance expertise and the cultural understanding that Māori data governance demands.

Common Questions From NZ Directors

How does the Companies Act 1993 create personal liability for AI governance failures?

Sections 131 through 138 of the Companies Act 1993 impose duties on directors to act in good faith, with care, diligence, and skill, and to avoid reckless trading. These duties are technology-agnostic. They apply to every material risk your organisation faces, including AI. If an AI system causes financial loss, privacy breaches under the Privacy Act 2020, or discriminatory outcomes that attract Human Rights Act 1993 scrutiny, and the board did not exercise reasonable oversight, individual directors may be held personally liable. The standard is what a reasonable director in the same circumstances would have done, and in 2026, a reasonable director is expected to have some form of AI governance framework in place. As more organisations establish board-level AI oversight, the threshold for "reasonable" rises. Boards without any governance structures for AI will find it increasingly difficult to argue they met the standard of care. We help directors understand exactly where their personal exposure sits and what steps satisfy the Companies Act's requirements.

How do Te Tiriti obligations apply at board level for AI?

Te Tiriti o Waitangi creates obligations of partnership, protection, and participation. For AI governance, this means boards must consider who holds rangatiratanga over data collected from or about Māori, whether AI systems perpetuate existing inequities affecting Māori communities, and whether meaningful consultation has occurred before deploying AI in areas with Māori impact. These are not operational decisions that can be delegated to project teams. They require board-level attention, and for Crown entities they are legally grounded obligations rather than voluntary best practice.

What does the FMA expect from boards regarding AI governance?

The FMA has not issued AI-specific regulations, but its conduct expectations already encompass technology systems that affect customers. If your organisation uses AI in financial advice, credit assessment, insurance underwriting, or customer interactions, the FMA expects the board to have oversight of those systems' fairness, transparency, and consumer outcomes. We help boards understand where their AI deployments intersect with FMA expectations and build governance that demonstrates proactive oversight.

Should our board create a dedicated AI committee or expand an existing one?

There is no single right answer. For organisations where AI is transformative to the business model, a dedicated technology and AI committee may be warranted. For most NZ organisations, expanding the mandate of the risk committee or audit and risk committee is more practical. What matters is that AI governance has a clear home within your board structure, with defined escalation paths and reporting cadences. We assess your current committee workload, AI maturity, and strategic direction to recommend the structure that will actually function, not just look good in an annual report.

NZ AI governance is mostly voluntary. Why invest now?

Precisely because it is voluntary. When New Zealand does not prescribe specific AI governance requirements, courts and regulators will look at what comparable organisations were doing to determine the standard of care under the Companies Act 1993. Boards that establish governance frameworks early are not just protecting their organisations. They are shaping the benchmark against which future compliance will be measured. The National AI Strategy, published in July 2025 (making New Zealand the last OECD country to formalise its AI position), signals a direction of travel toward greater regulatory structure. The FMA, RBNZ, and Privacy Commissioner are all signalling increased scrutiny of AI in their respective domains. The OECD AI Principles that underpin the strategy create expectations that flow through to every sector. Seventy-six per cent of NZ leaders are prioritising AI agents, and one in four says governance is the missing link. Organisations that wait for mandatory requirements will be building governance under pressure, public scrutiny, and potentially enforcement action, rather than ahead of it. Early investment in governance is simply good directorship.

Your Board Approved AI Adoption. Now It Needs Governance.

A 90-minute director briefing is where most boards start. We walk through your Companies Act 1993 obligations, your current AI exposure, Privacy Act 2020 implications, and the governance gaps that create personal liability. We cover FMA and RBNZ expectations relevant to your sector, Treaty of Waitangi considerations for Māori data governance, and the practical steps that demonstrate informed oversight. No sales pitch, just the information directors need to make informed decisions.

Complimentary 90-minute briefing for boards considering AI governance engagement