Understand what could go wrong before you deploy artificial intelligence
Comprehensive risk assessment designed for New Zealand organisations. We identify operational, privacy, ethical, and regulatory risks before they become costly problems, aligned with the Privacy Act 2020, FMA and RBNZ expectations, and Te Tiriti o Waitangi obligations.
New Zealand's light-touch regulatory approach means your organisation bears primary responsibility for identifying and managing AI risks. With 25% of organisations saying governance is the "missing link" and regulators watching closely, proactive risk assessment is the foundation of responsible AI adoption.
The challenge for New Zealand organisations
Your organisation is deploying AI, but traditional risk frameworks were not built to capture AI-specific risks like algorithmic bias, model drift, explainability failures, or Māori data sovereignty concerns. Aotearoa New Zealand's regulatory environment adds layers that generic international frameworks miss entirely.
Generic risk frameworks miss AI-specific risks
Your enterprise risk register captures operational and cyber risks, but AI introduces entirely new categories: training data quality and representativeness, model drift over time, algorithmic bias against Māori and Pacific communities, explainability failures in automated decisions, and cultural impacts that standard frameworks never anticipated. Without AI-specific assessment, these risks remain invisible until they cause harm.
Regulatory expectations are evolving rapidly
The Privacy Act 2020 does not mention AI explicitly. The Financial Markets Authority and Reserve Bank of New Zealand have not issued AI-specific rules. The Public Service AI Framework is voluntary. The OECD AI Principles provide direction but lack prescriptive detail. How do you assess regulatory compliance risk when the rules are still forming? Our consultants help you navigate this ambiguity and prepare for what comes next.
Treaty of Waitangi obligations demand specialised assessment
If your AI systems process Māori data or affect Māori communities, you have Te Tiriti o Waitangi obligations to consider. Cultural risks, Māori data sovereignty concerns, and the principle of kaitiakitanga require assessment frameworks grounded in te ao Māori. Most risk teams lack these specialised capabilities, and international frameworks provide no guidance whatsoever on these uniquely New Zealand requirements.
Why risk assessment matters in New Zealand's regulatory landscape
Aotearoa New Zealand released its first National AI Strategy in July 2025, making it the last OECD country to do so. The strategy adopted a light-touch, principles-based regulatory approach aligned with the OECD AI Principles. This means businesses and government agencies must self-assess their AI risks rather than follow a prescriptive compliance checklist.
The Privacy Commissioner enforces the Privacy Act 2020 and its 13 Information Privacy Principles across all data processing, including AI systems. The FMA has committed to ensuring financial innovations are introduced responsibly, while the RBNZ expects regulated entities to manage AI-related risks under existing prudential obligations. Neither regulator has issued AI-specific rules, but both are actively studying AI adoption in their sectors and expect organisations to demonstrate diligence.
For organisations handling Māori data or delivering services to Māori communities, the Treaty of Waitangi creates additional obligations around data sovereignty, cultural safety, and the protection of mana. The Algorithm Charter for Aotearoa provides government agencies with principles for transparent and accountable use of algorithms, but translating these principles into practical risk controls requires specialist expertise.
Our approach to AI risk assessment
Our team assesses AI risk across five dimensions: operational, privacy, ethical, regulatory, and cultural. This is not theoretical risk scoring but a practical assessment of what could actually go wrong in the New Zealand context, and of the controls your organisation needs.
Understand the AI system and its context
We examine what the AI system does, how it works, what data it uses, and what decisions it informs or automates. We document the technical architecture, data pipelines, model characteristics, and integration points. We also examine the organisational context: who procured the system, what governance exists, and how it interacts with New Zealand's regulatory requirements, including the Privacy Act 2020 and any sector-specific obligations under the FMA or RBNZ.
Deliverable: System documentation, data flow diagrams, technical assessment, regulatory mapping
Identify AI-specific risks across five dimensions
We assess risks across five categories tailored to New Zealand organisations: operational risks (model failure, data quality, vendor dependency, concentration risk), privacy risks (Privacy Act 2020 compliance, cross-border data flows, automated decision-making), ethical risks (algorithmic bias against Māori and Pacific populations, fairness, transparency), regulatory risks (FMA/RBNZ expectations, sector-specific obligations, OECD AI Principles alignment), and cultural risks (Treaty of Waitangi obligations, Māori data sovereignty, kaitiakitanga).
Deliverable: Comprehensive risk register with severity ratings and likelihood assessments
Assess impact and likelihood with scenario analysis
For each identified risk, our consultants assess potential impact across multiple dimensions: financial loss, reputational damage, regulatory enforcement by the Privacy Commissioner or sector regulators, customer and community harm, and cultural harm to Māori and Pacific communities. We use scenario analysis to understand what failure looks like and how it would cascade through your organisation. This provides boards and executives with clear, actionable risk intelligence.
Deliverable: Risk heat map, scenario analysis documentation, impact assessment report
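To make the heat-map step concrete, here is a minimal sketch of the kind of likelihood-by-impact scoring a risk heat map is built from. The scale labels, thresholds, and the `Risk` class are illustrative assumptions for this page, not our actual assessment methodology.

```python
# Hypothetical 5x5 likelihood-impact scoring sketch. Labels and
# thresholds are illustrative examples only.
from dataclasses import dataclass

LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost certain": 5}
IMPACT = {"minimal": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}

@dataclass
class Risk:
    name: str
    dimension: str   # operational | privacy | ethical | regulatory | cultural
    likelihood: str
    impact: str

    def score(self) -> int:
        # Heat-map cell value: likelihood rating times impact rating.
        return LIKELIHOOD[self.likelihood] * IMPACT[self.impact]

    def rating(self) -> str:
        # Bands chosen for illustration; a real framework would calibrate these.
        s = self.score()
        if s >= 15:
            return "critical"
        if s >= 8:
            return "high"
        if s >= 4:
            return "medium"
        return "low"

r = Risk("Model drift in credit scoring", "operational", "likely", "major")
print(r.score(), r.rating())  # 16 critical
```

In practice each register entry would also carry an owner, existing controls, and a residual score after mitigation; the sketch shows only the inherent-risk cell that plots onto the heat map.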
Recommend practical controls and mitigation strategies
We design practical controls to reduce risk to acceptable levels: technical controls (model monitoring, bias testing, validation frameworks), process controls (approval workflows, human oversight, escalation protocols), governance controls (policies, training, incident response procedures), and cultural controls (iwi consultation processes, Māori data governance protocols). We prioritise recommendations based on risk severity, implementation feasibility, and alignment with your existing governance structures.
Deliverable: Control framework, prioritised implementation roadmap, residual risk assessment
Five dimensions of AI risk
Our comprehensive risk assessment framework covers all categories relevant to New Zealand organisations, from Privacy Act compliance through to Treaty of Waitangi obligations.
Operational Risks
- Model accuracy degradation and performance drift over time
- Training data quality, completeness, and representativeness for New Zealand populations
- Vendor lock-in, service continuity, and concentration risk from reliance on a small number of AI providers
- Integration failures with existing systems and business processes
- Inadequate exit strategies and data portability provisions
Privacy Risks
- Privacy Act 2020 compliance gaps across the 13 Information Privacy Principles
- Unauthorised collection, use, or disclosure of personal information
- Cross-border data transfers to offshore AI vendors without adequate safeguards
- Data retention, deletion challenges, and individual access rights complications
- Privacy Commissioner enforcement risk from automated decision-making
Ethical Risks
- Algorithmic bias and discrimination against Māori, Pacific, and other communities
- Lack of transparency and explainability in AI-driven decisions
- Inappropriate automation of sensitive decisions affecting individuals
- Missing human oversight mechanisms and escalation pathways
- Misalignment with OECD AI Principles on fairness and accountability
Regulatory Risks
- FMA and RBNZ expectations for financial services organisations using AI
- Public Service AI Framework alignment gaps for government agencies
- Health Information Privacy Code 2020 compliance for healthcare AI
- Fair Trading Act 1986 exposure from misleading AI-generated content or claims
- Companies Act 1993 director liability for inadequate AI oversight
Cultural and Treaty Risks
Unique to Aotearoa New Zealand, cultural risk assessment examines how AI systems interact with Treaty of Waitangi obligations, Māori data sovereignty, and indigenous rights. This dimension is absent from international risk frameworks but essential for any organisation operating in New Zealand.
- Māori data sovereignty and control (rangatiratanga) over information about Māori individuals and communities
- Cultural harm from AI decisions that disproportionately affect Māori communities or undermine mana
- Absence of iwi and hapū consultation where Treaty partnership obligations apply
- Training data that fails to reflect te ao Māori perspectives or perpetuates bias
- Failure to apply kaitiakitanga principles to the guardianship of data and AI systems
Industry-specific risk assessment for New Zealand
Our team tailors AI risk assessment to the regulatory and operational context of your industry. Every sector in Aotearoa has distinct risk drivers that demand specialised approaches.
Financial services
The FMA has been actively studying AI adoption across asset management, banking, financial advice, and insurance. The RBNZ has identified vendor concentration risk, market distortion, and systemic risk from interconnected AI systems as key concerns. Our risk assessment addresses operational resilience under the Conduct of Financial Institutions (CoFI) Act, model risk management for credit and trading algorithms, and Fair Dealing provisions under the Financial Markets Conduct Act.
Healthcare
Healthcare AI carries unique risks around patient safety, clinical accuracy, and health information privacy. We assess against the Health Information Privacy Code 2020, Medsafe Software as a Medical Device classification, and the Code of Health and Disability Services Consumers' Rights. Research from Waitematā Healthcare has demonstrated that international AI frameworks are inappropriate for Aotearoa New Zealand's healthcare context, reinforcing the need for locally grounded risk assessment.
Government and public sector
Government agencies face overlapping obligations under the Public Service AI Framework, Privacy Act 2020, Government Procurement Rules, the Algorithm Charter for Aotearoa, and Treaty of Waitangi requirements. Our risk assessment helps agencies evaluate AI systems against all relevant frameworks simultaneously, with practical approaches that align with existing procurement and governance processes.
Technology and SaaS businesses
Auckland's growing technology sector is both developing and deploying AI at pace. Technology businesses need risk assessment that considers Privacy Act obligations for AI products, intellectual property implications, bias in AI outputs, and the governance expectations of enterprise and government customers. We help technology organisations build risk assessment into their development lifecycle, demonstrating responsible AI practices to customers and regulators alike.
Who this is for
Organisations deploying high-risk AI
AI systems making decisions about individuals, handling sensitive personal information, or operating in regulated industries need comprehensive risk assessment before deployment. This includes financial services firms using AI for credit decisions, healthcare organisations deploying clinical AI, and government agencies automating service delivery decisions.
Risk and compliance teams
Internal risk teams who need to assess AI systems but lack AI-specific risk assessment frameworks and expertise. We provide the specialised methodology, templates, and training that enable your team to assess AI risks independently going forward.
Boards and executives
Leadership seeking independent assessment of AI risks to inform deployment decisions and satisfy Companies Act 1993 director duties. Under New Zealand law, directors have oversight obligations that extend to the governance of AI systems their organisations deploy.
Organisations handling Māori data
Any organisation using AI with Māori data or affecting Māori communities needs to assess cultural risks and Treaty of Waitangi obligations. This includes government agencies, healthcare providers, education institutions, and businesses delivering services to Māori whānau, hapū, and iwi across Aotearoa New Zealand.
Frequently asked questions
How is AI risk assessment different from standard risk assessment?
AI introduces unique risk categories that traditional frameworks miss: algorithmic bias, explainability requirements, training data quality, model drift, vendor concentration, and cultural impacts on Māori and Pacific communities. In New Zealand, risk assessment must also account for Privacy Act 2020 compliance across automated decision-making, Treaty of Waitangi obligations, and the expectations of sector regulators like the FMA and RBNZ. Our team assesses these alongside standard operational and regulatory risks.
Do we need a risk assessment for every artificial intelligence system?
A risk-based approach is appropriate, consistent with New Zealand's light-touch regulatory philosophy. High-risk systems that make significant decisions about individuals, process sensitive personal information, or affect vulnerable populations need comprehensive assessment. Lower-risk productivity tools need lighter-touch review. Our consultants help you categorise systems appropriately and match assessment rigour to actual risk levels.
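The risk-based triage described above can be sketched as a simple decision rule. The criteria and tier names below are hypothetical illustrations of how a system might be routed to the right depth of assessment, not a formal rule set.

```python
# Illustrative triage sketch: match assessment rigour to risk level.
# Criteria and tier names are hypothetical examples.

def assessment_tier(makes_decisions_about_individuals: bool,
                    processes_sensitive_data: bool,
                    affects_vulnerable_populations: bool) -> str:
    """Suggest a depth of risk assessment for an AI system."""
    flags = sum([makes_decisions_about_individuals,
                 processes_sensitive_data,
                 affects_vulnerable_populations])
    if flags >= 2:
        return "comprehensive assessment"
    if flags == 1:
        return "standard assessment"
    return "lighter-touch review"

# A productivity chatbot handling no personal data:
print(assessment_tier(False, False, False))  # lighter-touch review
# A clinical decision-support tool:
print(assessment_tier(True, True, True))     # comprehensive assessment
```

A real categorisation exercise would weigh more factors (data sensitivity tiers, automation level, regulatory sector), but the principle is the same: a few screening questions route each system to a proportionate level of review.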
How long does an AI risk assessment take?
It depends on system complexity and the breadth of assessment required. Simple vendor-provided tools can be assessed in one to two weeks. Custom-built systems with sensitive use cases, cross-border data flows, or Māori data sovereignty considerations may take four to six weeks. We scope every assessment based on the risk level, complexity, and regulatory context specific to your organisation.
Can you assess AI systems we are procuring, not just building?
Yes. The majority of AI risk assessment work in New Zealand is for vendor-provided tools, from large platforms to specialised solutions. We assess based on vendor documentation, contracts, security questionnaires, and data processing agreements, identifying what additional information or contractual protections your organisation needs before deployment.
How does risk assessment relate to ISO 42001 certification?
ISO 42001 requires systematic AI risk assessment as a core component of an AI Management System. Our risk assessment methodology aligns with the standard's requirements, which means the outputs can feed directly into an ISO 42001 certification programme. For organisations in Auckland, Wellington, and Christchurch pursuing certification, this creates a clear pathway from initial risk assessment through to formal certification.
Related services
AI Governance Consulting
Risk assessment is one component of comprehensive AI governance. Build frameworks that manage risk systematically across all AI deployments in your organisation.
Privacy Act 2020 Compliance
Privacy risk is a critical component of AI risk. Ensure your systems comply with the 13 Information Privacy Principles and address automated decision-making requirements under New Zealand law.
ISO 42001 Certification
ISO 42001 requires systematic AI risk assessment as part of your AI Management System. Our methodology aligns with the standard's risk requirements for a seamless certification pathway.
Ready to understand your AI risks?
Schedule a consultation with our team to discuss your AI systems and receive a comprehensive risk assessment tailored to New Zealand's regulatory landscape, cultural obligations, and your organisation's specific context.