Healthcare AI Governance for Aotearoa New Zealand

Research from Waitematā Healthcare found that international artificial intelligence governance frameworks are inappropriate for clinical practice in Aotearoa. New Zealand's health system requires governance that accounts for the Health Information Privacy Code 2020, the Code of Health and Disability Services Consumers' Rights, Te Tiriti o Waitangi obligations, and the distinct health needs of Māori and Pacific communities. Our team builds that governance: not adapted from overseas templates, but developed from the ground up for organisations operating within this unique context.

See Our Healthcare AI Services

Why Off-the-Shelf Artificial Intelligence Governance Fails in NZ Healthcare

Most AI governance frameworks originate from the US, UK, or EU. They assume regulatory structures, population demographics, and health system models that do not reflect how healthcare works in Aotearoa New Zealand. Businesses selling AI tools to the NZ health sector cannot rely on these imported approaches either.

International models miss the NZ context

Waitematā Healthcare's published research on AI governance for clinical practice translation demonstrated that frameworks developed overseas do not account for New Zealand's regulatory environment, population health profile, or Te Whatu Ora's operational structure. Importing them wholesale creates governance gaps where they matter most: at the point of patient care. Our consultants develop context-specific governance that addresses these gaps.

Cultural safety is not optional in Aotearoa

AI tools trained on overseas datasets can produce outputs that are clinically unsafe for Māori and Pacific patients. Algorithmic bias in risk prediction, diagnostic imaging, and treatment recommendations compounds existing health inequities. The principle of kaitiakitanga demands that governance embed cultural safety assessment from the outset, not as an afterthought. Te Tiriti o Waitangi requires it.

Accountability gaps remain unresolved

When an AI clinical tool fails, who is responsible? When the company behind the tool is acquired, what happens to patient data? Who monitors ongoing performance? Who manages conflicts of interest? These questions do not have clear answers in most healthcare AI deployments in New Zealand. Our governance frameworks address them directly, built around the responsibilities that Medsafe, the Privacy Commissioner, and Te Whatu Ora expect organisations to meet.

The Regulatory Landscape for Clinical AI in Aotearoa

Healthcare AI in New Zealand operates under a layered regulatory framework. The Health Information Privacy Code sits at the centre, with consumer rights, Medsafe medical device regulation, and clinical governance obligations creating overlapping requirements that generic AI policies do not cover. Our team helps organisations navigate each layer.

HIPC

Health Information Privacy Code 2020

Primary framework for health data in New Zealand

  • Stricter than the Privacy Act 2020 for health information, with additional rules on collection, use, and disclosure that apply to every AI system processing clinical data
  • AI processing of consultation recordings, clinical notes, and patient data requires specific HIPC-compliant consent mechanisms
  • Overseas transfer restrictions apply when AI vendors process data offshore, a critical consideration for cloud-based AI tools
HDC

Code of Health and Disability Services Consumers' Rights

Patient rights and informed consent

  • Right 6: consumers have the right to be fully informed about the services they receive, including when AI is involved in their care
  • Right 7: services may be provided only with informed consent, which must cover the use of AI in diagnosis and treatment recommendations
  • Right 4: services must be provided with reasonable care and skill, including AI-assisted services, a standard that applies across New Zealand's health sector
Medsafe

Medsafe Medical Device Regulation

NZ's medicines and medical devices regulator

  • Software as a Medical Device (SaMD) classification applies to clinical AI tools, and businesses building diagnostic or treatment tools must determine whether Medsafe registration is required
  • Diagnostic imaging AI and treatment recommendation tools may require registration and post-market surveillance
  • Quality management system requirements apply to the ongoing monitoring and lifecycle of regulated AI systems

Te Whatu Ora Clinical Governance

Health New Zealand system-level patient safety

  • Clinical validation before AI tools are used in patient care across Aotearoa
  • Ongoing monitoring for performance drift, bias emergence, and equity impacts on Māori and Pacific populations
  • Incident reporting and response protocols for AI-related adverse events, aligned with Te Whatu Ora's quality and safety frameworks

Clinical AI Across the NZ Health System

AI is moving into clinical practice across every part of Aotearoa's health system. Each setting brings different governance challenges depending on scale, patient populations, and regulatory exposure. Our consultants bring sector-specific expertise to help each type of organisation build governance that fits its operational reality.

Te Whatu Ora and Public Hospitals

System-level AI governance that integrates with existing clinical governance, quality improvement, and equity commitments. Our approach works across departments and specialties, ensuring Health New Zealand's transformation agenda is supported by robust governance that protects patients across the system.

Primary Care and General Practice

AI scribes and clinical decision support tools are spreading rapidly through general practice across New Zealand. Practitioners need clear policies for consent, verification, and Health Information Privacy Code compliance that fit within the realities of a busy clinic. Our team develops practical governance that does not slow down patient care.

Healthtech Companies and Medical Device Manufacturers

Startups and established businesses building AI-powered clinical tools need governance that satisfies Medsafe, meets HIPC requirements, and demonstrates cultural safety to NZ healthcare purchasers. Our governance work positions healthtech organisations to win procurement evaluations from Te Whatu Ora and private providers.

Private Hospitals and Aged Care Providers

Private hospitals such as Southern Cross Healthcare, aged residential care facilities, and specialist clinics are deploying AI for clinical and operational purposes. Governance must address both patient safety and commercial considerations, ensuring Privacy Act 2020 compliance alongside quality care obligations.


Governance Designed for Aotearoa's Health System

Our approach follows the principle established by Waitematā Healthcare: clinical AI governance must be context-specific and population-appropriate. Every framework we develop is built from the New Zealand regulatory and clinical environment outward, not adapted from overseas templates. This is what sets our approach apart.

HIPC Compliance Assessment

  • Map all health information flows through AI systems
  • Assess Health Information Privacy Code rule compliance for each processing activity
  • Evaluate overseas transfer risks for cloud-based AI vendors operating outside New Zealand
  • Design consent processes that meet both HIPC and Consumer Rights Code requirements
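The first two assessment steps, mapping information flows and checking each one against HIPC concerns, lend themselves to a simple register-driven check. The sketch below is a minimal illustration, assuming a hypothetical data-flow register with `system`, `region`, and `consent` fields; the field names and rules are our assumptions, not a prescribed HIPC methodology.

```python
# Minimal sketch: screen a data-flow register for two HIPC concerns the
# assessment covers, offshore processing and missing consent mechanisms.
# The register format is an illustrative assumption.

def flag_flows(flows):
    """flows: list of dicts with 'system', 'region', and 'consent' keys.
    Returns (system, finding) pairs needing closer HIPC review."""
    findings = []
    for f in flows:
        if f["region"] != "NZ":
            # Overseas transfer rules apply when processing leaves NZ
            findings.append((f["system"], "overseas transfer: review disclosure rules"))
        if not f["consent"]:
            # Every clinical AI processing activity needs a consent mechanism
            findings.append((f["system"], "no consent mechanism recorded"))
    return findings

register = [
    {"system": "ai_scribe", "region": "AU", "consent": False},
    {"system": "triage_model", "region": "NZ", "consent": True},
]
for system, finding in flag_flows(register):
    print(f"{system}: {finding}")
```

In practice the register would also record data categories, retention, and the HIPC rule each flow engages; the point is that the assessment produces a reviewable artefact, not a one-off opinion.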

Cultural Safety Governance

  • Assess AI tool performance across Māori and Pacific patient populations
  • Identify training data gaps that create bias against NZ populations
  • Develop monitoring protocols for equity impact grounded in kaitiakitanga
  • Embed Te Tiriti o Waitangi considerations into AI governance structures

Clinical AI Translation Protocols

  • Validate AI outputs against NZ clinical practice standards
  • Define human oversight requirements for each clinical use case
  • Establish practitioner competency frameworks for AI-assisted care
  • Create escalation and override procedures for AI recommendations
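The oversight and escalation steps above can be expressed as an explicit routing rule, which makes the governance decision auditable. This is a minimal sketch under our own assumptions: the pathway names, confidence floor, and validated-use-case check are illustrative, not a clinical standard.

```python
# Minimal sketch of an escalation rule for AI recommendations: every output
# still requires routine clinician sign-off, and low-confidence or
# out-of-scope cases escalate to senior review. Thresholds are illustrative.

CLINICIAN_REVIEW = "clinician_review"
SENIOR_REVIEW = "senior_review"

def route(confidence, use_case, validated_use_cases, confidence_floor=0.8):
    """Return the human-oversight pathway for one AI recommendation."""
    if use_case not in validated_use_cases:
        # Tool has not been clinically validated for this use case
        return SENIOR_REVIEW
    if confidence < confidence_floor:
        # Model is uncertain; a senior clinician must review before use
        return SENIOR_REVIEW
    # Validated and confident, but still no fully automated pathway
    return CLINICIAN_REVIEW

validated = {"ed_triage"}
print(route(0.92, "ed_triage", validated))   # routine clinician sign-off
print(route(0.55, "ed_triage", validated))   # escalated: low confidence
print(route(0.92, "radiology", validated))   # escalated: not validated
```

Note the design choice: there is no auto-accept branch, reflecting the principle that AI-assisted care always retains a human decision-maker.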

Accountability Frameworks

  • Define responsibility allocation when AI tools fail or produce harm
  • Establish vendor contractual protections for data upon acquisition or sale
  • Design conflict of interest management for AI monitoring roles
  • Create IP sharing provisions that protect organisational and patient interests

Vendor Risk and Data Protection

  • Assess AI vendor data handling practices against HIPC and Privacy Act 2020 requirements
  • Evaluate data protection if the AI company is sold or changes ownership
  • Negotiate contractual terms for ongoing data governance obligations
  • Develop exit strategies and data return provisions for New Zealand organisations

Ongoing Monitoring Programmes

  • Define who is responsible for ongoing AI performance monitoring
  • Establish metrics for clinical accuracy, bias detection, and drift
  • Create incident response and reporting processes for AI failures
  • Build review cycles aligned with clinical governance reporting across the organisation
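The metrics in the programme above, subgroup accuracy, equity gaps, and drift against baseline, can be computed from routinely logged predictions. The sketch below is a minimal illustration assuming binary outcomes and a simple per-group sensitivity metric; group labels, tolerances, and the record format are our assumptions, not a mandated monitoring standard.

```python
# Minimal sketch of monitoring metrics: per-group sensitivity, the largest
# equity gap between groups, and a drift check against a baseline.
# Record format and thresholds are illustrative assumptions.

def sensitivity_by_group(records):
    """records: iterable of (group, y_true, y_pred) with binary labels."""
    counts = {}
    for group, y_true, y_pred in records:
        tp_fn = counts.setdefault(group, [0, 0])
        if y_true == 1:
            tp_fn[0 if y_pred == 1 else 1] += 1  # true positive vs false negative
    return {g: (tp / (tp + fn) if (tp + fn) else None)
            for g, (tp, fn) in counts.items()}

def equity_gap(per_group):
    """Largest sensitivity gap between any two groups with data."""
    vals = [v for v in per_group.values() if v is not None]
    return max(vals) - min(vals) if len(vals) > 1 else 0.0

def drift_alert(baseline, current, tolerance=0.05):
    """Groups whose sensitivity fell more than `tolerance` below baseline."""
    return [g for g, v in current.items()
            if v is not None and baseline.get(g, 0.0) - v > tolerance]
```

A review cycle would run these over each reporting period and feed any drift alerts or widening equity gaps into the incident and clinical governance processes described above.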

Healthcare AI Governance Questions From NZ Organisations

How is the HIPC different from the Privacy Act 2020 for AI governance purposes?

The Health Information Privacy Code 2020 modifies several of the Privacy Act's Information Privacy Principles specifically for health information. It imposes stricter rules on collection, use, disclosure, and overseas transfer of health data. For AI systems, this means HIPC-specific consent requirements, tighter restrictions on secondary use of clinical data for model training, and additional obligations when health information is processed by overseas AI vendors. You cannot rely on a generic Privacy Act compliance programme. Health AI governance must address the HIPC rules directly. Our team ensures your organisation meets both sets of requirements.

What does the Waitematā governance model mean for our organisation?

Waitematā Healthcare's research, published in npj Digital Medicine, established that AI governance frameworks developed internationally are not appropriate for clinical practice in New Zealand without significant adaptation. The key insight is that governance must be context-specific: built around NZ's regulatory environment, population health needs, and health system structure. In practice, this means your AI governance should be developed from your clinical context and patient population outward, including the health needs of Māori and Pacific communities, rather than adopting a model designed for a US hospital system or a UK NHS trust.

How do we address Māori data governance in clinical AI tools?

Most clinical AI tools are trained on datasets that underrepresent Māori and Pacific populations. This can produce biased risk scores, inaccurate diagnostic suggestions, and treatment recommendations that do not account for population-specific health patterns. Cultural safety governance requires assessing training data representativeness, testing tool performance across ethnic groups, monitoring for differential outcomes, and building review processes that include cultural expertise grounded in kaitiakitanga. Te Tiriti o Waitangi requires this level of care for Māori health data. This is not a one-time assessment. It requires ongoing monitoring as AI models update.

What are our obligations under the Consumer Rights Code when using AI?

The Code of Health and Disability Services Consumers' Rights gives patients the right to be fully informed about the services they receive, including when AI is involved in their care. Right 6 (right to information) and Right 7 (informed consent) mean you need to tell patients when AI is being used, what role it plays in their diagnosis or treatment, and what the limitations are. This applies to AI scribes recording consultations, clinical decision support tools influencing treatment plans, and diagnostic imaging AI. Generic consent forms are unlikely to be sufficient under New Zealand's regulatory framework.

What happens to our patient data if the AI vendor is acquired?

This is one of the most under-addressed risks in healthcare AI across Aotearoa. If the company behind your AI tool is sold, merged, or goes into receivership, the contractual protections for patient data may not transfer automatically. Your governance framework needs to include vendor risk provisions that address change of ownership: what happens to the data, whether new owners inherit the same HIPC obligations, what your rights are to retrieve or destroy data, and whether compliance continues under new ownership. Our consultants help organisations build these protections into vendor agreements before deployment, protecting both the organisation and New Zealand patients.

Healthcare AI Governance Built for Aotearoa

International frameworks were not designed for New Zealand's regulatory environment, population health needs, or Te Tiriti o Waitangi obligations. Our team develops clinical AI governance that addresses HIPC compliance, cultural safety, Māori data governance, accountability, and the specific questions your organisation needs answered before deploying AI in patient care. We bring the expertise that protects patients and meets the expectations of Medsafe, the Privacy Commissioner, and Te Whatu Ora.

Explore Healthcare AI Services