What counts as “high‑risk AI” in Australia in 2026


Australia is pivoting to a risk‑based approach to AI in 2026 – tightening guardrails where harm could be serious or irreversible while keeping low‑risk innovation unimpeded. In late 2024 the Commonwealth released a Proposals Paper to define “high‑risk AI” using principles and to mandate lifecycle guardrails focused on testing, transparency and accountability, with consultation closing 4 October 2024. A Senate Committee later recommended dedicated, economy‑wide legislation for high‑risk uses, backed by a principles‑based definition and a non‑exhaustive list (explicitly including general‑purpose AI).


This post distils what to treat as high‑risk AI in Australia right now, and sets it against the European Union (EU), United Kingdom (UK), United States (US), Canada, and Singapore so you can align your governance and assurance to international expectations.

At Andymus Consulting we recognise that understanding risk, and in particular high-risk AI activities, is important in developing trust with your clients and other stakeholders. Feel free to contact us to discuss any assistance you may need in this area.



Why this matters now

  • The Government’s January 2024 interim response committed to guardrails for AI in legitimate but high‑risk settings, prioritising ex‑ante prevention via testing, transparency and accountability. [industry.gov.au]
  • The September 2024 Proposals Paper sketches how to define high‑risk AI and apply 10 guardrails across the supply chain—either via sectoral amendments, a framework law, or a cross‑economy AI Act.
  • The November 2024 Senate report recommends a dedicated Act, a principles‑based high‑risk definition backed by an illustrative list of uses, and explicit coverage of GPAI.


The Australian definition: what to treat as high‑risk AI

Australia’s direction is use‑based: an AI system is high‑risk when its intended or foreseeable use could materially affect safety, human/worker rights, access to essential services, health outcomes, public benefits, or critical infrastructure, including general‑purpose AI embedded in those settings.

Common high‑risk settings to flag in your portfolio:

  • biometrics (identification, categorisation, emotion inference)
  • critical infrastructure and safety components of regulated products
  • employment, HR and worker management
  • education and training
  • credit, public benefits and access to essential services
  • health outcomes
  • law enforcement, migration and justice

What the guardrails will likely require: documented risk & impact assessment, pre‑deployment testing and in‑life monitoring, meaningful human oversight, and clear accountability across the developer → deployer chain.
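
As a minimal illustration of how those guardrails might be tracked internally, the sketch below models a per-system assurance record covering the obligations listed above. The field names and structure are our own assumptions for illustration, not anything prescribed in the Proposals Paper.

```python
from dataclasses import dataclass

# Illustrative only: field names are our own assumptions, not terms from the Proposals Paper.
@dataclass
class GuardrailRecord:
    """Per-system evidence record for the lifecycle guardrails discussed above."""
    system_name: str
    risk_impact_assessment_done: bool = False   # documented risk & impact assessment
    pre_deployment_testing_done: bool = False   # testing before release
    monitoring_plan_in_place: bool = False      # in-life / drift monitoring
    human_oversight_defined: bool = False       # meaningful human oversight
    accountable_owner: str = ""                 # accountability across the developer -> deployer chain

    def gaps(self) -> list[str]:
        """Return the guardrails that still lack evidence for this system."""
        checks = {
            "risk & impact assessment": self.risk_impact_assessment_done,
            "pre-deployment testing": self.pre_deployment_testing_done,
            "in-life monitoring": self.monitoring_plan_in_place,
            "human oversight": self.human_oversight_defined,
            "accountable owner": bool(self.accountable_owner),
        }
        return [name for name, done in checks.items() if not done]


record = GuardrailRecord(system_name="credit-scoring model", accountable_owner="Head of Risk")
print(record.gaps())  # lists the guardrails still missing evidence for this system
```

A simple register of records like this makes it easy to show, system by system, which guardrails have evidence behind them and which still need work.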


What’s already actionable in Australia

  • National framework for AI assurance in government (June 2024)
    implements the Australian AI Ethics Principles in government deployments; a practical reference for governance, transparency and accountability even outside the public sector.
  • NSW AI Assessment Framework (2024)
    mandatory for NSW Government, with structured risk self‑assessment and escalation of high‑risk systems to the AI Review Committee; a strong process model if you need a concrete, assurance‑led way to classify systems.
  • OAIC guidance (Oct 2024 / updated Jan 2025)
    requires PIAs for high‑privacy‑risk uses, cautions against entering personal/sensitive data into public GenAI, and stresses transparency where AI outputs affect individuals.
  • APRA lens (May 2024)
    no AI‑specific prudential rulebook (for now); entities must manage AI risks under technology‑neutral standards (e.g., CPS 234, CPS 230) with human accountability and robust oversight. [insurancenews.com.au], [brokerdaily.au]


Global benchmarks for “high‑risk AI”


European Union — legal list of high‑risk

The EU AI Act classifies high‑risk via two routes:

  1. AI that is a safety component of regulated products, and
  2. AI used in Annex III contexts—biometrics, critical infrastructure, education, employment/HR, essential services & benefits, law enforcement, migration/asylum, justice & democratic processes.

These systems face stringent obligations (risk management, data governance, logging, transparency, human oversight, robustness, post‑market monitoring). [eur-lex.europa.eu], [bundesnetzagentur.de]


United Kingdom — context‑based, regulator‑led

The UK applies five cross‑cutting AI principles (safety, transparency, fairness, accountability, contestability) via sector regulators rather than a single statute, with a central function supporting risk assessment; the Government is exploring binding requirements for highly capable GPAI. [gov.uk], [cdp.cooley.com]


United States (Federal use) — rights‑/safety‑impacting

OMB M‑24‑10 compels agencies to identify rights‑impacting and safety‑impacting AI uses and implement minimum practices (or stop using them), creating a clear high‑risk category for government AI. In industry, the NIST AI Risk Management Framework (plus a Generative AI Profile, 2024) is the de facto baseline for assessing and mitigating high‑risk AI. [whitehouse.gov], [crowell.com], [nist.gov], [data.aclum.org]


Canada — high‑impact (status update)

Canada’s proposed AIDA would have regulated high‑impact systems (akin to high‑risk) in areas like employment, essential services, biometrics and credit‑type determinations, but Bill C‑27 died on the Order Paper on 6 Jan 2025 following prorogation; any federal regime will need re‑introduction. [fasken.com], [gowlingwlg.com]


Singapore — model governance and GenAI focus

Singapore favours standards and testing over statutes: the Model AI Governance Framework and the 2024 Generative AI Framework detail practical controls—accountability, testing/assurance, content provenance, incident reporting—that organisations scale in higher‑risk contexts; IMDA maintains a crosswalk to NIST for interoperability. [imda.gov.sg], [aiverifyfo…ndation.sg]


Side‑by‑side: comparing high‑risk across jurisdictions

Critical infrastructure / safety components
  • Australia (proposed): high‑risk settings with mandatory guardrails in development (testing, transparency, accountability). [consultati…tga.gov.au]
  • EU AI Act: high‑risk via Annex II/III; strict obligations. [eur-lex.europa.eu], [bundesnetzagentur.de]
  • UK: risk judged in context by sector regulators. [gov.uk]
  • US: safety‑impacting AI; minimum practices required. [whitehouse.gov]

Jobs, credit, education, benefits, essential services
  • Australia (proposed): high‑risk settings under proposed principles. [consultati…tga.gov.au]
  • EU AI Act: high‑risk (Annex III). [bundesnetzagentur.de]
  • UK: regulator‑led, context‑based. [gov.uk]
  • US: rights‑impacting AI; minimum practices. [whitehouse.gov]

Biometrics (RBI, categorisation, emotion)
  • Australia (proposed): high‑risk setting. [consultati…tga.gov.au]
  • EU AI Act: high‑risk (Annex III). [bundesnetzagentur.de]
  • UK: regulator‑led, context‑based. [gov.uk]
  • US: often rights/safety‑impacting. [whitehouse.gov]

Law enforcement / migration
  • Australia (proposed): anticipated high‑risk. [consultati…tga.gov.au]
  • EU AI Act: high‑risk (Annex III). [bundesnetzagentur.de]
  • UK: regulator‑led, context‑based. [gov.uk]
  • US: typically rights/safety‑impacting. [whitehouse.gov]

GPAI in high‑risk contexts
  • Australia (proposed): explicitly considered for guardrails. [consultati…tga.gov.au]
  • EU AI Act: addressed via GPAI/systemic‑risk provisions. [eur-lex.europa.eu]
  • UK: exploring binding requirements. [cdp.cooley.com]
  • US: covered via use‑based procurement & risk. [whitehouse.gov]

Build once, comply many: standards that travel

  • ISO/IEC 42001:2023 (AI Management System) — the world’s first AIMS standard; operationalises governance (policy/roles), AI risk & impact assessment, lifecycle controls and continual improvement. Certification helps evidence readiness for higher‑risk deployments.
  • ISO/IEC 23894:2023 (AI Risk Management) — lifecycle guidance to identify, analyse and treat AI‑specific risks; complements ISO 42001 and maps well to NIST AI RMF.
  • NIST AI RMF (2023) + Generative AI Profile (2024) — widely adopted US framework; a strong reference model for categorising and mitigating high‑risk characteristics.
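
If you maintain a single internal control set, a simple crosswalk keeps the "build once, comply many" idea concrete. The sketch below is a hedged, thematic reading of how one set of controls might line up across the three frameworks; it is our own high-level grouping, not an official clause-by-clause mapping.

```python
# Hedged, thematic crosswalk: our own high-level reading of the three frameworks,
# not an official clause-by-clause mapping.
CONTROL_CROSSWALK = {
    "governance & accountability": {
        "ISO/IEC 42001": "management-system requirements (policy, roles, resources)",
        "ISO/IEC 23894": "risk management framework and organisational context",
        "NIST AI RMF": "Govern function",
    },
    "risk & impact assessment": {
        "ISO/IEC 42001": "AI risk and impact assessment processes",
        "ISO/IEC 23894": "risk identification and analysis guidance",
        "NIST AI RMF": "Map and Measure functions",
    },
    "lifecycle controls & monitoring": {
        "ISO/IEC 42001": "operational controls and continual improvement",
        "ISO/IEC 23894": "risk treatment, monitoring and review",
        "NIST AI RMF": "Manage function (plus the Generative AI Profile for GenAI)",
    },
}

def frameworks_for(theme: str) -> list[str]:
    """List the frameworks an internal control theme maps to."""
    return list(CONTROL_CROSSWALK.get(theme, {}))

print(frameworks_for("risk & impact assessment"))  # ['ISO/IEC 42001', 'ISO/IEC 23894', 'NIST AI RMF']
```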

A practical action plan for Australian organisations

  1. Triage your use‑cases
    Classify any AI that touches critical infrastructure, safety, or material rights/outcomes (health, HR, credit, benefits, education, justice) as high‑risk by default and plan for stronger assurance (see the sketch after this list). [Safe and responsible AI in Australia: Proposals paper for introducing mandatory guardrails for AI in high-risk settings]
  2. Adopt assurance‑first governance
    Institute AI risk & impact assessments, pre‑deployment testing, drift monitoring, and human‑in‑the‑loop oversight across high‑risk systems; align your management system to ISO/IEC 42001 and your risk controls to ISO/IEC 23894 / NIST AI RMF. [iso.org], [iso.org], [nist.gov]
  3. Privacy by design for AI
    When AI collects, infers or generates personal information, conduct PIAs, minimise data, and maintain clear user notifications; avoid entering personal/sensitive data into public GenAI. [oaic.gov.au]
  4. Prepare for export markets
    If you sell into the EU, assume Annex III where applicable and build EU‑grade documentation and testing now to de‑risk CE‑style obligations. [eur-lex.europa.eu], [bundesnetzagentur.de]
  5. Leverage government exemplars
    Use the National AI assurance framework and NSW AIAF as templates for escalation triggers, documentation, and independent review pathways. [finance.gov.au], [digital.nsw.gov.au]
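
To make the triage in step 1 concrete, here is a minimal sketch of a default-high-risk classifier over the settings named in this post. The domain labels and function name are illustrative assumptions, not categories defined in any Australian instrument.

```python
# Illustrative triage helper: the domain labels are our shorthand for the settings
# discussed in this post, not categories defined in any Australian instrument.
HIGH_RISK_DOMAINS = {
    "critical_infrastructure", "safety_component", "health", "employment_hr",
    "credit", "education", "public_benefits", "essential_services",
    "biometrics", "law_enforcement", "migration", "justice",
}

def triage(use_case: str, domains: set[str]) -> str:
    """Classify a use-case as high-risk by default when it touches a listed domain."""
    if domains & HIGH_RISK_DOMAINS:
        return f"{use_case}: HIGH-RISK by default - plan for stronger assurance"
    return f"{use_case}: lower risk - apply proportionate controls"

print(triage("resume screening assistant", {"employment_hr"}))
print(triage("internal meeting summariser", {"productivity"}))
```

Anything the helper flags would then flow into the assurance record and standards alignment described in steps 2 and 3.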

Closing thought

Australia’s line of march is clear: focus guardrails where the stakes are high and keep room for low‑risk innovation. If you treat the contexts above as high‑risk now and align your program to ISO 42001 / ISO 23894 / NIST, you’ll be ready for the Australian regime and interoperable with the EU, UK and US expectations when they knock on your door. [industry.gov.au] [iso.org], [iso.org], [nist.gov]

At Andymus Consulting we are able to assist with your needs in this area. Please contact us to discuss your requirements.



References & further reading
