
Global AI Law & Standards Index

A curated overview of the evolving legal, regulatory, and technical landscape shaping ethical AI.

From the EU AI Act and GDPR to national standards and sector-specific guidelines, this section offers a structured reference point for policymakers, researchers, and practitioners working to align AI systems with human values, legal obligations, and global best practices.

 

Understanding the regulatory landscape isn’t optional; it’s foundational to building AI systems that are lawful, trustworthy, and sustainable.

1

EU Artificial Intelligence Act (AI Act)

Overview: The EU AI Act is the world's first comprehensive AI regulation, classifying AI systems into four risk levels: unacceptable, high, limited, and minimal. It mandates transparency, accountability, and human oversight, especially for high-risk applications.

Key Provisions:
- Bans AI systems posing unacceptable risks.
- Imposes strict requirements on high-risk AI systems.
- Establishes a European Artificial Intelligence Board for oversight.
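
To make the risk-based structure easier to picture, here is a minimal Python sketch of how a compliance team might encode the four tiers and attach a simplified obligation checklist to each. The RiskTier enum, the obligations_for helper, and the obligation wording are illustrative assumptions made for this page, not text from the Act.

# Illustrative sketch only: the tier names follow the AI Act, but this data
# structure and the obligation summaries are a simplification, not legal text.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict pre- and post-market duties
    LIMITED = "limited"            # transparency duties
    MINIMAL = "minimal"            # no additional obligations under the Act

# Hypothetical, simplified obligation checklist per tier.
EXAMPLE_OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["Do not place on the EU market"],
    RiskTier.HIGH: [
        "Risk management system",
        "Human oversight measures",
        "Technical documentation and logging",
    ],
    RiskTier.LIMITED: ["Disclose to users that they are interacting with AI"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the simplified obligation checklist for a given risk tier."""
    return EXAMPLE_OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))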

2

OECD AI Principles

Overview: Adopted in 2019 and updated in 2024, these principles promote the responsible stewardship of trustworthy AI, emphasizing human rights and democratic values.

Core Principles:
- Inclusive growth and sustainable development.
- Human-centered values and fairness.
- Transparency and explainability.
- Robustness, security, and safety.
- Accountability.

3

UNESCO Recommendation on the Ethics of Artificial Intelligence

Overview: Adopted in 2021, this is the first global standard on AI ethics, applicable to all 194 UNESCO member states. It emphasizes human rights, dignity, and environmental sustainability.

Key Areas:
- Transparency and explainability.
- Fairness and non-discrimination.
- Human oversight and accountability.
- Data privacy and protection.

4

Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law

Overview: Adopted in September 2024, this is the first international legally binding treaty on AI, aiming to ensure that AI technologies align with fundamental human rights, democratic values, and the rule of law.

Key Provisions:
- Mandates risk and impact assessments to mitigate potential harms.
- Provides safeguards such as the right to challenge AI-driven decisions.
- Applies to public authorities and private entities acting on their behalf.

Signatories: The EU, UK, US, Canada, and other nations.

5

Blueprint for an AI Bill of Rights

Overview: Released by the White House Office of Science and Technology Policy in October 2022, this non-binding framework outlines five principles to guide the design, use, and deployment of automated systems.

Key Principles:
- Safe and effective systems.
- Algorithmic discrimination protections.
- Data privacy.
- Notice and explanation.
- Human alternatives, consideration, and fallback.

6

Federal Trade Commission (FTC) AI Guidance

Overview: The FTC has issued guidance emphasizing legal and ethical risk mitigation when deploying AI, urging companies to provide transparent and unbiased AI-driven solutions.

Key Recommendations:
- Avoiding deceptive or unfair practices.
- Ensuring transparency and accountability.
- Mitigating algorithmic bias.

7

Japan’s Social Principles of Human-Centric AI

Overview: Established in 2019, Japan's principles aim to ensure that AI is developed and used in a manner that respects human dignity and supports a sustainable society.

Key Focus Areas:
- Human-centered values.
- Fairness and transparency.
- Collaboration between stakeholders.

8

China: Regulations on the Administration of Algorithmic Recommendation Services (2022)

Issued by: Cyberspace Administration of China, jointly with other agencies; in effect since March 2022.

Scope: Online services that use recommendation algorithms to rank, push, or otherwise personalize content for users.

Key Requirements:
- Inform users that algorithmic recommendation is being used and explain its basic principles.
- Offer users an option to switch off personalized recommendations.
- Do not use algorithms for unreasonable differential pricing or other discriminatory treatment.
- Provide specific protections for minors and older users.
- File algorithms with public opinion influence or social mobilization capacity with the regulator.

9

China: Provisions on the Administration of Deep Synthesis Internet Information Services (2023)

Issued by: Cyberspace Administration of China.

Scope: Targets services that use deep synthesis technologies (e.g., deepfakes, synthetic audio/video) to generate or alter content.

Key Requirements:
- Labelling: All AI-generated or altered content must be clearly labeled to avoid misleading the public.
- Consent: Deep synthesis services must not create synthetic representations of real individuals without their explicit consent.
- Abuse Prevention: Providers must take measures to prevent the use of synthetic media for fraud, impersonation, or destabilization.
- Security Reviews: Algorithms that could influence public opinion or national security must undergo security assessments and registration.
- Traceability: Technical mechanisms must be in place to track the source and generation process of synthetic content.
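
As a purely illustrative aid, the short Python sketch below shows one hypothetical way a provider might record the labelling, consent, and traceability information these Provisions call for. The SyntheticMediaRecord class and its field names are invented for this page; the regulation does not prescribe any particular schema.

# Hypothetical provenance record for a piece of synthetic media; the schema
# is an assumption for illustration, not the regulation's required format.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class SyntheticMediaRecord:
    content_id: str                  # identifier of the generated asset
    generator: str                   # service or model that produced it
    ai_generated_label: bool = True  # content must be clearly labelled
    subject_consent: bool = False    # explicit consent if a real person is depicted
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = SyntheticMediaRecord(
    content_id="asset-0001",
    generator="example-voice-model",  # hypothetical service name
    subject_consent=True,
)
print(json.dumps(asdict(record), indent=2))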

10

Brazil’s National Artificial Intelligence Strategy, EBIA (Estratégia Brasileira de Inteligência Artificial)

Key Objectives of EBIA:
- Promote Responsible AI Research and Innovation: Encouraging the development of AI technologies that align with ethical principles and human rights.
- Ensure Data Protection and Privacy: Aligning AI applications with Brazil’s General Data Protection Law (LGPD) to safeguard personal data.
- Foster International Cooperation: Engaging in global discussions and partnerships to promote best practices in AI development and governance.
- Enhance Education and Workforce Training: Developing programs to prepare the workforce for the AI-driven digital economy.
- Apply AI in Public and Productive Sectors: Implementing AI solutions to improve public services and boost productivity in various industries.

The strategy is structured around nine thematic axes, including legislation and ethical use, AI governance, international aspects, education, workforce training, research and innovation, application in productive sectors, application in the public sector, and public security. These axes are supported by 73 strategic actions designed to operationalize the strategy's objectives.

11

Global AI Law and Policy Tracker

Overview: The Global AI Law and Policy Tracker is a dynamic, regularly updated resource designed to monitor and document AI-related legislation, regulatory proposals, and policy frameworks around the world. It helps governments, companies, researchers, and the public stay informed about the evolving legal landscape for artificial intelligence.

Jurisdictional Updates: Covers AI policy developments in countries across North America, Europe, Asia-Pacific, Latin America, and Africa.

Legislation Status: Tracks whether regulations are proposed, under review, or enacted.

Key Themes:
- Risk-based regulation (e.g., EU AI Act)
- Privacy and data protection
- Algorithmic accountability
- Transparency and human oversight
- National AI strategies and ethical frameworks

Global Comparisons: Enables side-by-side insights into how AI is being governed differently across nations and regions.

Policy Trends: Highlights emerging global norms, such as alignment with OECD and UNESCO AI principles.
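
For readers who keep their own notes, here is a minimal Python sketch of what a single tracker-style entry might look like, capturing jurisdiction, instrument, status, and themes for side-by-side comparison. The Status enum and TrackerEntry class are assumptions invented for this page, not the tracker's actual data model.

# Hypothetical record format for one jurisdiction's entry; the field names
# and classes are illustrative assumptions, not the tracker's schema.
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    PROPOSED = "proposed"
    UNDER_REVIEW = "under review"
    ENACTED = "enacted"

@dataclass
class TrackerEntry:
    jurisdiction: str
    instrument: str
    status: Status
    themes: list[str]

entry = TrackerEntry(
    jurisdiction="European Union",
    instrument="EU AI Act",
    status=Status.ENACTED,
    themes=["risk-based regulation", "transparency", "human oversight"],
)
print(f"{entry.jurisdiction}: {entry.instrument} ({entry.status.value})")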

12

HIPAA (Health Insurance Portability and Accountability Act)

HITECH Act

FDA Medical Device Guidelines (Including Software as a Medical Device – SaMD)

13

GDPR (General Data Protection Regulation)

Medical Device Regulation (MDR, EU 2017/745)

14

Canada – PIPEDA

Brazil – LGPD (Lei Geral de Proteção de Dados)

Australia – Privacy Act 1988

15

ISO/IEC 27001

ISO 13485 – Medical Devices QMS

NIST Cybersecurity Framework

16

HL7 (Health Level Seven)

FHIR (Fast Healthcare Interoperability Resources)

ICD-10 – WHO

17

Digital Operational Resilience Act (DORA)

Fair Credit Reporting Act (FCRA)

Equal Credit Opportunity Act (ECOA)

18

ISO/IEC 27005:2022 – Information Security Risk Management

19

Basel Committee – AI in Financial Services

Fair Credit Reporting Act (FCRA)

25

Swiss Federal Data Protection and Information Commissioner (FDPIC)

Contact Information

ijvourganas(at)netrity(dot)co(dot)uk

jvourganas(at)teemail(dot)gr
