
Healthcare AI Governance for Aotearoa New Zealand

Research from Waitemata Healthcare found that international AI governance frameworks are inappropriate for clinical practice in Aotearoa. NZ's health system requires governance that accounts for the Health Information Privacy Code, the Code of Health and Disability Services Consumers' Rights, Te Tiriti obligations, and the distinct health needs of Māori and Pacific communities. We build that governance.

See Our Healthcare AI Services

Why Off-the-Shelf AI Governance Fails in NZ Healthcare

Most AI governance frameworks originate from the US, UK, or EU. They assume regulatory structures, population demographics, and health system models that do not reflect how healthcare works in Aotearoa.

International models miss the NZ context

Waitemata Healthcare's work on AI governance for clinical practice demonstrated that frameworks developed overseas do not account for New Zealand's regulatory environment, population health profile, or Te Whatu Ora's operational structure. Importing them wholesale creates governance gaps where they matter most -- at the point of patient care.

Cultural safety is not optional

AI tools trained on overseas datasets can produce outputs that are clinically unsafe for Māori and Pacific patients. Algorithmic bias in risk prediction, diagnostic imaging, and treatment recommendations compounds existing health inequities. Governance must embed cultural safety assessment from the outset, not as an afterthought.

Accountability gaps remain unresolved

When an AI clinical tool fails, who is responsible? When the company behind the tool is acquired, what happens to patient data? Who monitors ongoing performance? Who manages conflicts of interest? These questions do not have clear answers in most healthcare AI deployments. Our governance frameworks address them directly.

The Regulatory Landscape for Clinical AI in Aotearoa

NZ healthcare AI operates under a layered regulatory framework. The Health Information Privacy Code sits at the centre, with consumer rights, medical device regulation, and clinical governance obligations creating overlapping requirements that generic AI policies do not cover.

HIPC

Health Information Privacy Code 2020

Primary framework for health data

  • Stricter than the Privacy Act for health information -- additional rules on collection, use, and disclosure
  • AI processing of consultation recordings, clinical notes, and patient data requires specific HIPC-compliant consent
  • Overseas transfer restrictions apply when AI vendors process data offshore
HDC

Code of Health and Disability Services Consumers' Rights

Patient rights and informed consent

  • Right 6: consumers have the right to the information a reasonable consumer would expect to receive, including whether AI is involved in their care
  • Right 7: services may be provided only with informed consent, which must cover the use of AI in diagnosis and treatment recommendations
  • Right 4: services must be provided with reasonable care and skill, including AI-assisted services
Medsafe

Medical device regulation

  • Software as a Medical Device (SaMD) classification for clinical AI tools
  • Diagnostic and treatment recommendation tools may require registration
  • Post-market surveillance and quality management system requirements

Te Whatu Ora Clinical Governance

System-level patient safety

  • Clinical validation before AI tools are used in patient care
  • Ongoing monitoring for performance drift and bias emergence
  • Incident reporting and response protocols for AI-related adverse events

Clinical AI Across the NZ Health System

AI is moving into clinical practice across every part of Aotearoa's health system. Each setting brings different governance challenges depending on scale, patient populations, and regulatory exposure.

Te Whatu Ora and Public Hospitals

System-level AI governance that integrates with existing clinical governance, quality improvement, and equity commitments. Frameworks that work across departments and specialties.

Primary Care and General Practice

AI scribes and clinical decision support are spreading rapidly through general practice. Practitioners need clear policies for consent, verification, and HIPC compliance that fit within the realities of a busy clinic.

Healthtech Companies and Medical Device Manufacturers

Startups and established companies building AI-powered clinical tools need governance that satisfies Medsafe, meets HIPC requirements, and demonstrates cultural safety to NZ healthcare purchasers.

Private Hospitals and Aged Care Providers

Private hospital networks such as Southern Cross, aged residential care facilities, and specialist clinics are deploying AI for both clinical and operational purposes. Governance must address patient safety alongside commercial considerations.


Governance Designed for Aotearoa's Health System

Our approach follows the principle established by Waitemata Healthcare: clinical AI governance must be context-specific and population-appropriate. Every framework we develop is built from the NZ regulatory and clinical environment outward, not adapted from overseas templates.

HIPC Compliance Assessment

  • Map all health information flows through AI systems
  • Assess HIPC rule compliance for each processing activity
  • Evaluate overseas transfer risks for cloud-based AI vendors
  • Design consent processes that meet HIPC and Consumer Rights Code requirements
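
The data-flow mapping step above can be made concrete as a register that surfaces HIPC review items. This is a minimal sketch, not a compliance tool: the `AIDataFlow` fields and the two checks (missing consent, offshore processing) are illustrative assumptions, and a real register would cover each HIPC rule.

```python
from dataclasses import dataclass

@dataclass
class AIDataFlow:
    """One entry in a (hypothetical) register of health information
    flows through an AI system, used to surface HIPC review items."""
    system: str             # e.g. "AI scribe"
    information: str        # what health information is processed
    purpose: str            # why it is collected and used
    consent_obtained: bool  # HIPC-compliant consent in place?
    stored_offshore: bool   # does the vendor process data overseas?

def hipc_review_items(flows):
    """Flag flows needing HIPC attention: missing consent, or
    offshore processing that triggers overseas-transfer checks."""
    items = []
    for f in flows:
        if not f.consent_obtained:
            items.append(f"{f.system}: no HIPC-compliant consent recorded")
        if f.stored_offshore:
            items.append(f"{f.system}: offshore processing - review transfer safeguards")
    return items
```

Keeping the register as structured data, rather than a spreadsheet of free text, lets the same records drive both the compliance assessment and later monitoring reviews.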

Cultural Safety Governance

  • Assess AI tool performance across Māori and Pacific patient populations
  • Identify training data gaps that create bias against NZ populations
  • Develop monitoring protocols for equity impact
  • Embed Te Tiriti considerations into AI governance structures
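
The subgroup performance assessment above can be sketched in code. A minimal illustration only, assuming a binary classifier's predictions and recorded ethnicity are available per patient record; the record keys and the review threshold are hypothetical, and real equity monitoring would use clinically agreed metrics and thresholds.

```python
from collections import defaultdict

def subgroup_performance(records, min_sensitivity=0.85):
    """Compare a clinical AI tool's sensitivity and specificity by
    ethnicity group and flag groups falling below a review threshold.

    Each record is a dict with (hypothetical) keys:
    'ethnicity', 'label' (true outcome, 0/1), 'pred' (model output, 0/1).
    """
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
    for r in records:
        c = counts[r["ethnicity"]]
        if r["label"] == 1:
            c["tp" if r["pred"] == 1 else "fn"] += 1
        else:
            c["fp" if r["pred"] == 1 else "tn"] += 1

    report = {}
    for group, c in counts.items():
        sens = c["tp"] / (c["tp"] + c["fn"]) if c["tp"] + c["fn"] else None
        spec = c["tn"] / (c["tn"] + c["fp"]) if c["tn"] + c["fp"] else None
        report[group] = {
            "sensitivity": sens,
            "specificity": spec,
            "flag_for_review": sens is not None and sens < min_sensitivity,
        }
    return report
```

The point of the sketch is that equity assessment is a computation you can rerun every time the model or its data changes, not a one-off sign-off.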

Clinical AI Translation Protocols

  • Validate AI outputs against NZ clinical practice standards
  • Define human oversight requirements for each clinical use case
  • Establish practitioner competency frameworks for AI-assisted care
  • Create escalation and override procedures for AI recommendations

Accountability Frameworks

  • Define responsibility allocation when AI tools fail or produce harm
  • Establish vendor contractual protections for data upon acquisition or sale
  • Design conflict of interest management for AI monitoring roles
  • Create IP sharing provisions that protect organisational and patient interests

Vendor Risk and Data Protection

  • Assess AI vendor data handling practices against HIPC requirements
  • Evaluate data protection if AI company is sold or changes ownership
  • Negotiate contractual terms for ongoing data governance obligations
  • Develop exit strategies and data return provisions

Ongoing Monitoring Programmes

  • Define who is responsible for ongoing AI performance monitoring
  • Establish metrics for clinical accuracy, bias detection, and drift
  • Create incident response and reporting processes for AI failures
  • Build review cycles aligned with clinical governance reporting
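
One common way to operationalise the drift metric above is the Population Stability Index, which compares the distribution of model scores in production against a validation baseline. A minimal sketch under stated assumptions: the binning scheme and the conventional "PSI > 0.2 suggests drift" rule of thumb are illustrative, and a governance programme would set its own triggers.

```python
import math

def population_stability_index(baseline, current, bins=10):
    """Population Stability Index between a baseline distribution of
    model scores (e.g. from validation) and current production scores.
    Larger values indicate the score distribution has shifted; the
    thresholds used to trigger review are a governance decision."""
    lo = min(min(baseline), min(current))
    hi = max(max(baseline), max(current))
    width = (hi - lo) / bins or 1.0

    def proportions(scores):
        counts = [0] * bins
        for s in scores:
            i = min(int((s - lo) / width), bins - 1)
            counts[i] += 1
        n = len(scores)
        # Small floor avoids log(0) for empty bins.
        return [max(c / n, 1e-6) for c in counts]

    b, c = proportions(baseline), proportions(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))
```

Wiring a check like this into the review cycle gives the named monitoring owner a concrete, repeatable signal rather than an open-ended obligation to "watch for drift".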

Healthcare AI Governance Questions From NZ Organisations

How is the HIPC different from the Privacy Act for AI governance purposes?

The Health Information Privacy Code 2020 modifies several of the Privacy Act's Information Privacy Principles specifically for health information. It imposes stricter rules on collection, use, disclosure, and overseas transfer of health data. For AI systems, this means HIPC-specific consent requirements, tighter restrictions on secondary use of clinical data for model training, and additional obligations when health information is processed by overseas AI vendors. You cannot rely on a generic Privacy Act compliance programme -- health AI governance must address the HIPC rules directly.

What does the Waitemata governance model mean for our organisation?

Waitemata Healthcare's research established that AI governance frameworks developed internationally are not appropriate for NZ clinical practice without significant adaptation. The key insight is that governance must be context-specific: built around NZ's regulatory environment, population health needs, and health system structure. In practice, this means your AI governance should be developed from your clinical context and patient population outward, rather than adopting a framework designed for a US hospital system or a UK NHS trust and hoping it fits.

How do we address cultural safety in clinical AI tools?

Most clinical AI tools are trained on datasets that underrepresent Māori and Pacific populations. This can produce biased risk scores, inaccurate diagnostic suggestions, and treatment recommendations that do not account for population-specific health patterns. Cultural safety governance requires assessing training data representativeness, testing tool performance across ethnic groups, monitoring for differential outcomes, and building review processes that include cultural expertise. This is not a one-time assessment -- it requires ongoing monitoring as AI models update.

What are our obligations under the Consumer Rights Code when using AI?

The Code of Health and Disability Services Consumers' Rights gives patients the right to be fully informed about the services they receive, including when AI is involved in their care. Right 6 (right to information) and Right 7 (informed consent) mean you need to tell patients when AI is being used, what role it plays in their diagnosis or treatment, and what the limitations are. This applies to AI scribes recording consultations, clinical decision support tools influencing treatment plans, and diagnostic imaging AI. Generic consent forms are unlikely to be sufficient.

What happens to our patient data if the AI vendor is acquired?

This is one of the most under-addressed risks in healthcare AI. If the company behind your AI tool is sold, merged, or goes into receivership, the contractual protections for patient data may not transfer automatically. Your governance framework needs to include vendor risk provisions that address change of ownership: what happens to the data, whether new owners inherit the same obligations, what your rights are to retrieve or destroy data, and whether HIPC compliance continues under new ownership. We help organisations build these protections into vendor agreements before deployment.

Healthcare AI Governance Built for Aotearoa

International frameworks were not designed for NZ's regulatory environment, population health needs, or Te Tiriti obligations. We develop clinical AI governance that addresses HIPC compliance, cultural safety, accountability, and the specific questions your organisation needs answered before deploying AI in patient care.

Explore Healthcare AI Services