The EU AI Act: What it Means for Global AI Policymaking

On June 14, 2023, the European Parliament voted to approve its text of the European Union’s Artificial Intelligence Act, which would levy obligations on developers of high-risk AI systems as well as on developers of foundation models. The vote paved the way for the opening of negotiations with EU Member States – the so-called trilogues – which will determine the final text of the AI Act in the coming months.

Since it was first tabled in 2021, the AI Act has inspired countries around the world as they seek to determine their own approaches to AI governance. Some are considering hard-law approaches similar to the EU’s, while others are weighing self-regulatory, voluntary, or sectoral approaches. In Latin America, for example, the Brazilian Congress is considering a draft bill to regulate AI, and countries like Chile and Colombia have also introduced AI legislation. Canada is considering how to regulate AI as part of its Bill C-27 package. Conversations on AI governance are also underway in the United States, both within the Biden Administration – including the National Telecommunications and Information Administration’s (NTIA) AI Accountability Policy proceeding and the Office of Science and Technology Policy’s work on National Priorities for AI – and in Congress, through efforts by U.S. Senate Majority Leader Schumer, Representative Ted Lieu, and other Members. With all this activity unfolding, the EU AI Act will shape how other jurisdictions approach AI policy, potentially acting as the de facto standard for AI governance in the absence of other AI regulation.

This means that we are also at a pivotal moment for alignment. AI is borderless; its development, use, and impact transcend geographic boundaries. To avoid fragmented regulatory approaches, ensure consistent safeguards, and promote the global adoption of AI, we must discuss how to maximize the interoperability of regulatory approaches to AI worldwide, recognizing that complete harmonization may not be feasible.

With that in mind, we offer below some perspectives on the EU AI Act, including considerations that may be relevant for other jurisdictions as they think through their own approaches to AI regulation.

  • Regulation should target specific high-risk uses of AI, rather than AI itself. We support a risk-based approach to regulation, and we appreciate that the European Union’s proposal levies obligations primarily on high-risk AI systems. Regulation should focus on uses of AI that truly pose a risk to fundamental rights or safety, and on specific uses deployed in particular contexts, rather than designating all uses of AI in a given sector as high-risk. Importantly, a horizontal approach is not the only appropriate approach to regulation.

  • Avoid classifying all foundation models as high-risk. The European Parliament’s text notably proposes new rules for foundation models, targeting models “trained on broad data at scale” which can be adapted to a wide range of tasks with different risk profiles. Given their broad applicability, not all foundation models should be subject to requirements designed for high-risk systems; they should be subject to the requirements of the Act only where they are implemented in a high-risk use case.

  • Enable transparency across the AI ecosystem. At the same time, because foundation models can be integrated into a variety of downstream applications, additional measures may be appropriate for developers of foundation models to undertake. For example, information-sharing and disclosure across the AI ecosystem will help enable compliance for deployers who implement a model in a high-risk use case, contributing to a fair allocation of responsibility and enabling holistic risk management across the value chain, from producers to providers to customers and beyond.

  • Ensure that regulation is based on international standards. Truly international standards are crucial to fostering innovation and necessary for interoperability across borders; region-specific technical standards erect trade barriers and fragment the market. For this reason, it is essential that policymakers globally leverage international standards as the means of demonstrating compliance with regulatory requirements.

  • Build an agile and forward-looking enforcement system. It is important that the AI Act’s enforcement be coordinated with the enforcement of other pieces of digital legislation, including the Digital Services Act, Digital Markets Act, General Data Protection Regulation, Cyber Resilience Act, and the NIS2 Directive. Enforcement authorities must be equipped with a diverse array of expertise. A complex, uncoordinated enforcement model risks producing differing interpretations of the AI Act’s provisions, creating unnecessary legal complexity that could hamper the development of a robust AI ecosystem in Europe. Enforcement is also relevant to other countries thinking about AI regulation. In our view, a domain-specific approach is most appropriate, in which existing agencies leverage their existing authorities to address AI-related risks. In addition, coordination across sectoral and domain-specific regulators will be needed. In the United States, for example, the National AI Initiative Office could play a helpful coordinating role, while the National Institute of Standards and Technology could provide AI-specific expertise to agencies that need additional support.

For additional policy recommendations, we encourage readers to review our recent submission to NTIA on AI Accountability Policy, as well as our Global AI Policy Recommendations.

