Global AI Standards & Requirements in 2026

Artificial intelligence (AI) standards and regulation has shifted from ideas to implementation. The EU AI Act is phasing in across 2025–2027. The US is operationalizing the NIST AI Risk Management Framework and a Generative AI Profile. Australia is moving toward mandatory “guardrails” for high‑risk AI while uplifting privacy and public‑sector controls. Singapore, Japan, China, Canada and the UK have each advanced distinct – but increasingly interoperable approaches including international standards.

This post curates what’s binding vs. voluntary, timelines, key standards, and the practical controls organisations should adopt now. If you would like advice or support for this Andymus Consulting can assist.

Why it matters.

Whether you build foundation models or deploy AI in finance, health, critical infrastructure or the public sector, you’ll need a common controls language that works across jurisdictions: think NIST AI RMF for risk, ISO/IEC 42001 for an AI management system, EU AI Act risk tiers for market access, and content provenance for AI‑generated media. The good news: there’s now enough convergence to act decisively.

Global baselines shaping national rules

OECD Logo

OECD AI Principles (2019; updated 2024) are the first intergovernmental standards for trustworthy, human‑centric AI (inclusion, human rights, transparency, robustness, accountability) and underpin many national frameworks.

UNESCO logo

UNESCO Recommendation on the Ethics of AI (2021) provides global normative guidance with actionable “Policy Action Areas” (data governance, environment, education, gender) adopted by all 193 member states.

Council of Europe Framework Convention on AI (2024) is the first legally binding treaty on AI, human rights, democracy and rule of law—open for signature beyond Europe and already attracting global signatories.

G7 Hiroshima 2023 Logo

G7 Hiroshima AI Process (2023‑24) issued International Guiding Principles and a Developer Code of Conduct for advanced models—non‑binding but now a reference point for governments and industry.


European Union Flag

European Union — the EU AI Act (Regulation (EU) 2024/1689)

What it is. The first comprehensive, horizontal AI law, in force since 1 August 2024, with staged application through 2025–2027 covering prohibitions, GPAI duties, governance, penalties, and high‑risk system obligations. [whitecase.com], [europarl.europa.eu]

Key dates (high level).

  • 2 Feb 2025: Prohibitions & AI literacy start.
  • 2 Aug 2025: GPAI/model rules, governance and penalties apply; notified bodies operational.
  • 2 Aug 2026: General application of most provisions.
  • 2027: Broad high‑risk requirements bite fully (with some transitions). [artificial…enceact.eu], [schoenherr.eu]

Why standards matter. Under Article 40, conforming to harmonised standards gives a legal presumption of conformity, so watch CEN/CENELEC and ISO/IEC deliverables mapped to EU requirements. [europarl.europa.eu]

What to do now. Perform risk tiering (prohibited / high‑risk / limited / minimal), build a risk, data & quality management system aligned to ISO/IEC 42001 (AIMS) and ISO/IEC 23894 (AI risk), and prepare technical documentation and post‑market monitoring. [iso.org], [iso.org]


United States Flag

United States — NIST‑led governance + federal policy

  • Executive Order 14110 (Oct 2023) directed a whole‑of‑government approach to safe, secure, trustworthy AI with assignments to NIST, DOE, DHS and others (e.g., testing, critical infrastructure, biosecurity, civil rights). [bidenwhite…chives.gov], [federalregister.gov]
  • NIST AI Risk Management Framework (AI RMF 1.0, 2023) is the de facto national baseline (Govern–Map–Measure–Manage), supported by an AI Resource Center and Playbook. [nist.gov]
  • NIST Generative AI Profile (NIST‑AI‑600‑1, July 2024) adds concrete measures for GenAI (e.g., evals/red‑teaming, misuse mitigation, data controls). [nist.gov]
  • OMB M‑24‑10 (Mar 2024) mandated Chief AI Officers, public inventories, and risk controls for rights‑ and safety‑impacting federal AI; subsequent 2025 memoranda under a new administration adjusted the approach to accelerate adoption while retaining core safeguards. [whitehouse.gov], [jdsupra.com]

Takeaway. For US‑market operations and global alignment, implement NIST AI RMF and the GenAI Profile as your operating control set, then crosswalk to ISO/IEC and local laws. [nist.gov]


Australian Flag

Australia — guardrails, privacy uplift, and public‑sector policy

  • Government interim response (Jan 17, 2024) signalled mandatory guardrails for high‑risk AI (testing, transparency, accountability) and launched a Voluntary AI Safety Standard as an immediate uplift; proposals consulted through late 2024. [industry.gov.au], [minterellison.com]
  • Policy for the responsible use of AI in government v2.0 (effective Dec 15, 2025) requires accountable officials, transparency statements, risk‑based use‑case assessments, registers, and staff training across non‑corporate Commonwealth entities. [digital.gov.au]
  • Privacy reforms (Privacy and Other Legislation Amendment Act 2024) introduced stronger enforcement, automated decision‑making transparency obligations, and a statutory tort for serious invasions of privacy (commenced 2025). [ashurst.com], [corrs.com.au]
  • Regulator coordination (DP‑REG) continues with working papers on LLMs and multimodal foundation models, pointing to competition/consumer, online safety and privacy risks—helpful signals for enterprise risk assessments. [accc.gov.au], [acma.gov.au]
  • OAIC GenAI guidance (Oct 2024) clarifies that publicly available data isn’t automatically fair game for training; treat sensitive information with heightened consent and risk controls. [oaic.gov.au]

Takeaway. Expect mandatory guardrails in high‑risk contexts; uplift your privacy, ADM transparency, and safety testing practices now to get ahead. [minterellison.com]


United Kingdom Flag

United Kingdom — principles‑based and regulator‑led

The UK’s “pro‑innovation” approach empowers existing regulators to apply five cross‑cutting principles (safety, transparency/explainability, fairness, accountability & governance, contestability & redress) rather than introducing an immediate horizontal AI law, with the AI Safety Institute deepening evaluations of frontier systems. [questions-…liament.uk], [kpmg.com]


Canadian Flag

Canada — federal law paused; governance via code(s) and instruments

After the Artificial Intelligence and Data Act (AIDA) within Bill C‑27 died on the order paper in Jan 2025, Canada relies on a Voluntary Code of Conduct for advanced GenAI and sector/treasury instruments while policymakers consider next steps. [mcinnescooper.com], [ised-isde.canada.ca]


Singaporean Flag

Singapore — practical governance + AI assurance

  • Model AI Governance Framework (GenAI) (final May 30, 2024) sets nine dimensions: accountability, data, trusted development & deployment, incident reporting, testing/assurance, security, content provenance, safety/alignment R\&D, AI for public good; developed by IMDA and AI Verify Foundation with NIST cross‑walks. [imda.gov.sg], [imda.gov.sg]

AI Verify Foundation provides open‑source evaluation tooling and a global assurance sandbox—useful if you need practical test assets for model/app assessments.


Japanese Flag

Japan — business‑ready guidance and G7 leadership

  • AI Guidelines for Business v1.0 (Apr 19, 2024) (METI & MIC) consolidate earlier guidance and set role‑based expectations for developers, providers, and users with principles for safety, transparency, privacy, fairness, and accountability. [meti.go.jp]
  • Japan also led the G7 Hiroshima AI Process, shaping Guiding Principles and the Developer Code of Conduct for advanced AI systems. [japan.go.jp]

Chinese Flag

China — algorithmic governance + deep synthesis + generative AI rules

  • Algorithmic Recommendation Provisions (effective Mar 1, 2022) mandate disclosure, opt‑out, protections for minors/elderly, anti‑manipulation and labeling obligations for algorithmic content.
  • Deep Synthesis (Deepfake) Provisions (effective Jan 10, 2023) require visible labeling of synthetic content, security assessments for sensitive features (e.g., facial/voice editing), and content controls.
  • Interim Measures for Generative AI Services (effective Aug 15, 2023) set duties for public‑facing GenAI services, including security assessments for services with “public opinion/social mobilization” attributes and algorithm filings.
  • AI Standardization Guidelines (2024 Edition) outline a plan to build a comprehensive national AI standards system by 2026 across seven domains including safety/governance (50+ new standards targeted). [wap.miit.gov.cn], [cspress.cn]

The standards stack you can implement now

National Institute of Standards and Technology

NIST AI RMF 1.0 + Generative AI Profile — lifecycle risk management and GenAI‑specific controls (evals, misuse mitigation, monitoring). Ideal as an operating control set across jurisdictions. [nist.gov]

ISO/IEC 42001 (AI Management System) — the management‑system backbone for policies, roles, documented processes and continual improvement (think “ISO 27001 for AI”). Pair with ISO/IEC 23894 for risk.

International Standards Organisation Logo
European Telecommunications Standards Institute Logo

ETSI TC SAI — TS 104 223 (2025)baseline cyber security requirements for AI models/systems across the lifecycle (13 principles → 72 provisions), plus supporting reports on traceability, testing, mitigations, data supply chain. Great for secure‑by‑design programmes.

C2PA (Content Credentials v2.x) — open standard for cryptographically verifiable provenance of images/video/audio; increasingly adopted by tools and platforms to identify AI‑generated or manipulated media.

Coalition for content provenance and authenticity

Contact Andymus Consulting to discuss your requirements for implementing AI standards and governance.

Comments are closed