Balancing Acts: Navigating the Diverse Landscapes of the EU AI Act and US AI Executive Order
The European Union's AI Act and the United States' AI Executive Order represent two landmark approaches to governing the burgeoning field of Artificial Intelligence, each reflecting the distinct policy landscapes and priorities of their respective regions.
EU AI Act in a Nutshell
First Comprehensive AI Law:
Proposed in April 2021, the EU AI Act is considered the first significant legal framework specifically targeting AI technologies.
Risk-Based Classification:
Unacceptable Risk: Bans AI systems posing extreme risks, such as manipulative practices or mass surveillance.
High Risk: Covers AI in critical sectors like healthcare and law enforcement. These systems must undergo rigorous safety and compliance assessments.
Limited Risk: Imposes light transparency obligations, such as disclosing to users that they are interacting with an AI system.
Generative AI: Systems like ChatGPT must comply with transparency rules, including disclosing AI-generated content and preventing the generation of illegal content.
Safety and Transparency Focus:
The Act emphasizes that AI systems must be safe, transparent, traceable, non-discriminatory, and subject to human oversight.
Pending Final Form:
As of mid-2023, the EU Parliament has adopted its position, and the final law is expected by the end of the year.
US AI Executive Order in a Nutshell
(For in-depth commentary, refer to this blog)
AI Safety and Security Standards:
The EO establishes standards to protect against risks such as AI-enabled fraud and the misuse of AI to engineer dangerous biological materials.
Privacy Protections:
Prioritizes the development of privacy-preserving AI techniques and assesses federal AI practices concerning privacy.
Equity and Civil Rights:
Aims to prevent AI from exacerbating discrimination, focusing on justice, healthcare, and housing sectors.
Consumer and Worker Protection:
Addresses AI's potential harms in healthcare and education and develops principles to mitigate AI's impact on the workforce.
Innovation and Competition Emphasis:
Encourages AI research, supports small AI developers, and expands visa criteria for AI experts to maintain a competitive AI ecosystem.
Global Leadership and Collaboration:
Focuses on safe and responsible AI use internationally and aims to establish global AI standards.
Government Use of AI:
Directs federal agencies to use AI responsibly and efficiently, improving AI procurement processes and deploying AI professionals.
Both the EU AI Act and the US AI EO reflect their regions' approach to AI regulation and development, with the EU taking a more detailed regulatory stance (similar to GDPR) and the US focusing on fostering innovation, privacy, and global leadership in AI.
The comparison between the EU's Artificial Intelligence (AI) Act and the US AI Executive Order (EO) reveals several key differences and similarities:
Scope and Approach:
EU AI Act: Aims to establish a unified regulatory framework directly applicable in all member states, addressing AI comprehensively through a single regulation.
US AI EO: Relies on the powers of the Presidency to direct executive departments to develop industry standards and regulations for AI usage. This approach may lead to divergent standards across different sectors.
Nature of Regulations:
EU AI Act: Enforces binding regulations, with violations incurring fines and penalties without further legislative action. It adopts a use-case based approach, identifying prohibited and high-risk AI use cases.
US AI EO: Focuses more on developing standards and guidelines than on binding regulations, though in practice the ultimate effect of the two instruments could be similar.
Foundation Models and Dual-Use AI:
EU AI Act: Sets controls on the deployment of foundation models, such as large language models or image generation AIs. It has recently moved to more closely regulate very capable foundation models.
US AI EO: Pays special attention to dual-use foundation models that pose risks to security and public safety. It includes requirements for red-teaming by AI foundation model developers and submission of findings to the government.
Open Source Models:
EU AI Act: The final version of the EU AI Act appears to exempt AI systems that are open source and do not present systemic risk. This exemption applies unless the system is commercially available or put into service and is a defined "high-risk" system under the Act.
US AI EO: The EO does not specifically mention open source models or carve out exemptions for open source projects developing foundation models, leaving the treatment of open source AI models under the EO open to interpretation and uncertainty.
Testing and Monitoring:
Both the AI Act and the EO emphasize system testing and monitoring throughout an AI system’s lifecycle, including pre-market testing and post-market monitoring policies.
Privacy Protection:
Both documents focus on individual privacy protection. The AI Act leverages the existing GDPR framework, while the EO necessitates the formulation of a relevant regime in the US, in the absence of nationwide privacy legislation.
Cybersecurity:
Both mandate adherence to cybersecurity standards. The EO uniquely focuses on preventing the exploitation of AI models by international malicious cyber entities.
Government Involvement and Intellectual Property:
EU AI Act: Lacks specific initiatives on government investment in AI testbeds, which are covered in other EU laws.
US AI EO: Directs government agencies to establish AI testbeds for testing AI tools and technologies. It also touches on intellectual property issues related to AI, advocating for clarifying patent and copyright law boundaries.
Broader Political Dimensions:
The EO also addresses broader political issues like immigration, education, housing, and labor, which are not explicitly covered by the AI Act.
Global Compliance Strategy:
Businesses operating globally may find substantial overlap in the compliance efforts required to meet both the AI Act's and the EO's requirements. The EU leans towards formal demonstrations of compliance, while under the US approach, aligning activities with industry standards may suffice.
Reactions to the US AI Executive Order
Scholarly Reactions: Scholars have given initial reactions focusing on various aspects of the Executive Order, such as non-industry led AI research, inadequate safeguards for workers' rights, and the tension between competition and innovation.
Comprehensive but Limited: Experts acknowledge that while the Executive Order is comprehensive and significant in setting priorities and principles for AI, it is limited by the existing authorities and appropriations of the executive branch agencies. The need for bipartisan recognition and legislation for a balanced approach to AI is emphasized.
Sectoral Approach to AI Governance: The Order is seen as more than just posturing, taking a significant step in advancing a sectoral approach to AI governance and highlighting the importance of data privacy and legislative action.
Focus on AI Ethics and International Engagement: The Order's emphasis on AI ethics and international engagement is appreciated, signaling the US's intent to lead the global conversation on AI ethics by example.
Aggressive but Necessary: Despite the likelihood of encountering hurdles and court challenges, the Executive Order is viewed as necessary to balance AI innovation with responsible use, emphasizing safety, privacy, equity, and consumer protection.
Potential Vaporware: Some experts criticize the Order as akin to vaporware in the software industry, suggesting it is a long way from actual implementation. The Order is vast in scope but may not materialize as presented.
Comprehensive Vision for AI: The Order is recognized for its comprehensive vision in ensuring responsible development of AI systems, addressing safety, security, equity, privacy, and the recruitment and retention of AI talent.
Reactions to the EU AI Act
Mixed Reactions from European Tech Firms: European technology firms have given a mixed reaction to the EU’s AI Act. While there is general approval of the tiered risk-based approach and regulation of foundation models, there is still uncertainty about what systems will count as high-risk.
Concerns About Compliance Tasks: There is fear that compliance tasks for high-risk systems may fall to developers, potentially making European tech companies less attractive workplaces.
Resource Allocation Concerns: The new requirements of the AI Act are seen as potentially diverting resources from hiring AI engineers to legal compliance, affecting innovation capacity.
In summary, the US AI Executive Order has garnered generally positive reactions, albeit with some concerns about its scope, implementation, and the limitations inherent in executive action. Conversely, the EU AI Act has elicited mixed responses from European tech firms, who appreciate its approach but are apprehensive about regulatory clarity. This contrast mirrors the broader regulatory landscapes: the EU AI Act creates a stringent, unified framework, while the US AI EO promotes diverse, sector-specific standards and guidelines. Both, however, emphasize crucial aspects such as privacy, cybersecurity, and the necessity for thorough AI system testing and monitoring. I continue to believe we're in the very early stages of this process, and it's encouraging to see government entities taking an active interest; this will remain an open dialogue as AI implementations permeate the wider societal landscape.