Know what could go wrong before you deploy AI
Comprehensive AI risk assessment that identifies operational, privacy, ethical, and regulatory risks before they become problems. Aligned with Privacy Act 2020, FMA/RBNZ expectations, and Treaty obligations.
The challenge
Your organisation is deploying AI, but traditional risk frameworks don't capture AI-specific risks like algorithmic bias, explainability gaps, or data sovereignty.
Generic risk frameworks miss AI-specific risks
Your enterprise risk register captures operational and cyber risks, but AI introduces new categories: training data quality, model drift, algorithmic bias, explainability failures, and cultural impacts. Standard risk frameworks don't identify these.
Regulatory risks keep changing
Privacy Act 2020 doesn't mention AI explicitly. FMA and RBNZ haven't issued AI-specific rules. The Public Service AI Framework is voluntary. How do you assess regulatory compliance risk when the rules are unclear?
Treaty obligations need assessment
If your AI processes Māori data or affects Māori communities, you have Treaty of Waitangi obligations to consider. Cultural risks and data sovereignty issues require specialised assessment frameworks that most risk teams don't have.
Our approach
We assess AI risk across five dimensions: operational, privacy, ethical, regulatory, and cultural. Not theoretical risk scoring, but practical assessment of what could actually go wrong.
Understand the AI system
We examine what the AI does, how it works, what data it uses, and what decisions it makes. We understand the technical architecture, data pipelines, model characteristics, and integration points - essential context for identifying risks.
Deliverable: System documentation, data flow diagrams, technical assessment
Identify AI-specific risks
We assess risks across five categories: operational (model failure, data quality, vendor dependency), privacy (Privacy Act 2020 compliance, data sovereignty), ethical (bias, fairness, transparency), regulatory (FMA/RBNZ expectations, sector-specific rules), and cultural (Treaty obligations, Māori data governance).
Deliverable: Risk register with severity ratings and likelihood assessments
Assess impact and likelihood
For each identified risk, we assess potential impact (financial, reputational, regulatory, customer harm, cultural harm) and likelihood of occurrence. We use scenario analysis to understand what failure looks like and how it would cascade through your organisation.
Deliverable: Risk heat map, scenario analysis, impact assessment
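As a rough illustration of how impact and likelihood combine into a heat-map band, the sketch below uses a common 5×5 scoring grid. The scales and thresholds here are hypothetical examples for illustration, not our actual methodology.

```python
# Illustrative 5x5 impact x likelihood scoring, as commonly used in risk
# registers. Scales and thresholds are hypothetical, not a prescribed method.

IMPACT = {"minor": 1, "moderate": 2, "significant": 3, "major": 4, "severe": 5}
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3,
              "likely": 4, "almost certain": 5}

def rate(impact: str, likelihood: str) -> str:
    """Combine impact and likelihood into a heat-map band."""
    score = IMPACT[impact] * LIKELIHOOD[likelihood]
    if score >= 15:
        return "extreme"
    if score >= 8:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

# Example: model drift that is likely, with significant customer impact
print(rate("significant", "likely"))  # high (3 * 4 = 12)
```

In practice the bands, and what counts as "significant" impact, are calibrated to each organisation's risk appetite.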
Recommend controls and mitigations
We design practical controls to reduce risk to acceptable levels: technical controls (monitoring, testing, validation), process controls (approval workflows, human oversight), and governance controls (policies, training, incident response). We prioritise based on risk severity and implementation feasibility.
Deliverable: Control framework, implementation roadmap, residual risk assessment
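The prioritisation step above can be sketched in miniature: score each candidate control on the severity of the risk it addresses and on implementation feasibility, then order the roadmap by both. The control names, scales, and weighting below are hypothetical.

```python
# Hypothetical sketch: prioritise mitigations by risk severity and
# implementation feasibility (both on an illustrative 1-5 scale).

controls = [
    {"name": "output monitoring",     "severity": 5, "feasibility": 4},
    {"name": "human approval step",   "severity": 4, "feasibility": 4},
    {"name": "full model retraining", "severity": 5, "feasibility": 1},
]

# Higher severity and higher feasibility both push a control up the roadmap.
prioritised = sorted(controls,
                     key=lambda c: c["severity"] * c["feasibility"],
                     reverse=True)

for c in prioritised:
    print(c["name"])
```

Note how a high-severity but low-feasibility control (retraining) drops below quick wins such as monitoring, which is the trade-off the roadmap makes explicit.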
Five dimensions of AI risk
Comprehensive risk assessment covers all categories relevant to New Zealand organisations.
Operational Risks
- Model accuracy degradation over time
- Training data quality and completeness
- Vendor lock-in and service continuity
- Integration failures with existing systems
Privacy Risks
- Privacy Act 2020 compliance gaps
- Unauthorised data collection or use
- Cross-border data transfers to AI vendors
- Data retention and deletion challenges
Ethical Risks
- Algorithmic bias and discrimination
- Lack of transparency and explainability
- Inappropriate automation of sensitive decisions
- Missing human oversight mechanisms
Regulatory Risks
- FMA/RBNZ expectations for financial services
- Public Service AI Framework alignment (government)
- Health Information Privacy Code compliance (healthcare)
- Sector-specific regulatory scrutiny
Cultural and Treaty Risks
- Māori data sovereignty and control (rangatiratanga)
- Cultural harm from AI decisions affecting Māori communities
- Lack of iwi consultation where appropriate
- Training data that doesn't reflect te ao Māori perspectives
Who this is for
Organisations deploying high-risk AI
AI systems making decisions about individuals, handling sensitive data, or operating in regulated industries need comprehensive risk assessment before deployment.
Risk and compliance teams
Internal risk teams who need to assess AI systems but lack AI-specific risk assessment frameworks and expertise.
Boards and executives
Leadership seeking independent assessment of AI risks to inform deployment decisions and satisfy oversight obligations.
Organisations handling Māori data
Any organisation using AI with Māori data or affecting Māori communities, needing to assess cultural risks and Treaty obligations.
Frequently asked questions
How is AI risk assessment different from standard risk assessment?
AI introduces unique risk categories that traditional frameworks miss: algorithmic bias, explainability requirements, training data quality, model drift, and cultural impacts. We assess these alongside operational and regulatory risks.
Do we need a risk assessment for every AI system?
A risk-based approach is appropriate. High-risk systems (those making significant decisions, processing sensitive data, or affecting vulnerable populations) need comprehensive assessment. Lower-risk tools need a lighter-touch review. We help you categorise systems appropriately.
How long does an AI risk assessment take?
It depends on system complexity. Simple vendor-provided tools can be assessed in 1-2 weeks; custom-built systems with sensitive use cases may take 4-6 weeks. We scope assessments based on risk level and complexity.
Can you assess AI systems we're procuring, not just building?
Yes. Most AI risk assessments we conduct are for vendor-provided tools. We assess based on vendor documentation, contracts, and security questionnaires, identifying what additional information or contractual protections you need.
Related services
AI Governance Consulting
Risk assessment is one component of comprehensive AI governance. Build frameworks that manage risk systematically across all AI deployments.
Learn more →
Privacy Act 2020 Compliance
Privacy risk is a critical component of AI risk. Ensure your systems comply with the 13 Information Privacy Principles and automated decision-making requirements.
Learn more →
ISO 42001 Certification
ISO 42001 requires systematic AI risk assessment. Our methodology aligns with the standard's risk requirements.
Learn more →
Ready to understand your AI risks?
Schedule a consultation to discuss your AI system and receive a comprehensive risk assessment that identifies what could go wrong before it does.