AI Governance for Australia & New Zealand

We help regulated organisations across Australia and New Zealand build practical AI governance frameworks. No offshore playbooks. Just frameworks designed for how ANZ businesses and government agencies actually operate.

Our Mission

AI governance in Australia and New Zealand is evolving rapidly. Australian regulators like APRA, ASIC, and the OAIC are setting expectations for how AI should be governed in financial services, insurance, and healthcare. New Zealand takes a principles-based, voluntary approach shaped by Treaty obligations and the Privacy Act 2020.

Most global AI governance frameworks don't account for the regulatory nuances of either market. They miss APRA's CPS 230 operational resilience requirements, ASIC's responsible AI expectations, New Zealand's Māori data sovereignty principles, and the FMA/RBNZ regulatory landscape.

Our mission is to close that gap. We provide the frameworks, assessments, and implementation guidance that organisations in both countries need to build responsible AI governance - before regulation forces it and before boards start asking questions you can't answer.


Two Markets, One Team

We operate across both sides of the Tasman, with deep understanding of each market's regulatory environment, cultural context, and governance expectations.

Australia

Australia's regulatory landscape for AI is shaped by APRA, ASIC, the OAIC, and emerging federal AI governance frameworks. Financial services, insurance, and superannuation organisations face increasing expectations around model risk management, operational resilience, and responsible AI.

APRA CPS 230 & CPS 234 compliance for AI systems
ASIC responsible AI expectations for financial services
Model governance and third-party AI risk management
Board-level AI governance and reporting frameworks

Sydney • Melbourne

New Zealand

New Zealand's approach to AI governance is principles-based and voluntary, shaped by the Privacy Act 2020, Treaty of Waitangi obligations, and the Public Service AI Framework. Organisations need practical interpretation of how existing laws apply to AI.

Privacy Act 2020 compliance for AI systems
Treaty of Waitangi and Māori data sovereignty integration
Public Service AI Framework implementation
FMA/RBNZ expectations for financial services AI

Auckland • Wellington

Our Commitment to Te Tiriti o Waitangi

The Treaty of Waitangi creates constitutional obligations that apply to AI governance in Aotearoa. This isn't an optional add-on or a checkbox exercise. It's a fundamental requirement for responsible AI in New Zealand.

Māori Data Sovereignty

When AI systems process Māori data, questions of data sovereignty, mana over information, and kaitiakitanga (guardianship) arise. We integrate Māori data governance principles into every framework we design for New Zealand organisations.

This includes consultation processes with Māori stakeholders, cultural impact assessment methodologies, and bias detection protocols for indigenous populations.

Te Ao Māori Perspectives in AI Ethics

AI governance frameworks need to incorporate te ao Māori worldviews, not just Western ethical principles. Concepts like whanaungatanga (relationships), manaakitanga (care and respect), and tikanga (protocol) inform how AI should be designed and deployed.

We work with organisations to embed these perspectives authentically, not superficially.

Cultural Safety in AI Applications

AI systems used in healthcare, social services, or government must be culturally safe for Māori and Pacific communities. This means assessing algorithmic bias, evaluating harm to specific populations, and designing mitigation strategies.

Cultural safety isn't achieved through generic fairness metrics. It requires context-appropriate methodologies built for Aotearoa.

Partnership Model

We approach Māori data governance through partnership, not consultation as an afterthought. This means involving Māori data experts, respecting tikanga throughout the process, and ensuring Treaty obligations are integrated from the start.

Government agencies and organisations handling Māori data need this partnership approach to meet their obligations under Te Tiriti.

Why Australia & New Zealand Need Regional AI Governance

Different Regulatory Landscapes

Australia has prescriptive regulators (APRA, ASIC) while New Zealand takes a principles-based, voluntary approach. One framework doesn't fit both - you need region-specific governance.

Treaty Obligations

New Zealand has unique constitutional obligations for indigenous data governance. Māori data sovereignty and te ao Māori perspectives must be integrated into AI frameworks - obligations without parallel in most other jurisdictions.

Privacy Frameworks

Australia's Privacy Act 1988 and New Zealand's Privacy Act 2020 both apply to AI systems processing personal information, but with different requirements, principles, and enforcement approaches.

Financial Regulators

APRA and ASIC in Australia, and the FMA and RBNZ in New Zealand, each set different expectations for AI in financial services, insurance, and superannuation.

ISO 42001 Certification

The international standard for AI management systems applies in both markets but requires region-specific implementation to address local regulatory requirements and cultural contexts.

Cross-Tasman Operations

Many organisations operate across both markets. We help build governance frameworks that satisfy regulators on both sides of the Tasman while maintaining consistency.

Want to Understand Our Approach?

Book a 30-minute conversation about how we build AI governance frameworks tailored to your regulatory environment - whether that's APRA/ASIC in Australia, Privacy Act and Treaty obligations in New Zealand, or both.