Getting the EU AI Act Done: What’s Left to Do?

The upcoming political trilogue will be a crucial milestone and could prove decisive in reaching an agreement on the EU AI Act, the EU's landmark AI legislation. EU policymakers are currently grappling with some of the most contentious issues in the Act, including which uses of AI will be banned and the rules for general-purpose AI (GPAI) and foundation models, the building blocks of many AI systems.

The stakes for the European Union are huge: for the EU digital economy to remain competitive, it is imperative that any compromise EU negotiators reach reflects two key principles. First, the Act should lay out rules that provide legal clarity for companies and are flexible enough to stand the test of time. Second, the regulation should remain proportionate and risk-based to limit any unnecessary impact on innovation and AI adoption.

Building on ITI’s recommendations ahead of September’s trilogues, here are four considerations lawmakers should address while seeking to clinch a final deal.

1. A Two-Tier Approach for Foundation Models or General-Purpose AI Would Create Significant Legal Uncertainty

Policymakers have recently proposed introducing a two-tier approach to the regulation of foundation models and general-purpose AI, with stronger obligations for certain more powerful AI models designated as 'highly capable' or 'high impact' and deemed to carry 'systemic risks.' While it is unclear which models would fall into this category, they would be defined according to specific criteria, including, as proposed, the amount of compute used for their training or the 'frontier capabilities' they exhibit.

Proposals for a two-tier approach are concerning. Crafting a definition of 'highly capable models' that is suitable for regulation is extremely difficult, especially in light of the ongoing international dialogue on the topic. Based on the definitions and criteria proposed so far, it is not clear which specific risks policymakers hope to address. It is crucial that any proposal on foundation models clearly delineates the risks it is meant to mitigate. For example, the amount of compute that goes into training a model is not, per se, a meaningful measure of risk, because models trained with less computing power may equally be used in risky contexts. At the same time, frontier capabilities are a moving target: as the technology evolves, so does the notion of what counts as 'highly capable,' and any such definition would have to be continuously updated as AI development advances. Tiered requirements would therefore introduce significant uncertainty into the market for companies that must plan their compliance accordingly, and would hamper AI innovation in Europe.

Instead of the proposed two-tiered approach, the EU AI Act should remain aligned with existing international efforts, such as the G7 Hiroshima AI Process, that seek to foster common definitions and to identify and address unique frontier risks in certain powerful foundation models. These instruments offer the flexibility companies and public authorities around the world need to cooperate meaningfully and address emerging risks.

2. Obligations for Foundation Models or General-Purpose AI Need to Remain Proportionate

Requirements that apply to developers of foundation models or general-purpose AI should be targeted and realistic. First, policymakers should aim to increase transparency across the AI ecosystem. Developers of foundation models should share information about their models' capabilities and limitations with downstream entities that use the model to develop specific high-risk AI applications, so that those entities can comply with the regulatory obligations that may apply to them.

Second, any risk management requirement should remain targeted and limited to what can reasonably be addressed during the design and development of a foundation model. Requirements should focus on overarching outcomes rather than prescribing specific means. Maintaining a flexible approach, including through innovative governance arrangements, regulatory dialogue, and cooperation with industry, will allow the framework to keep pace with the evolving AI safety landscape and to integrate metrics, tools, and best practices currently in development, all while leaving room for innovation in this space.

Finally, policymakers should uphold the original risk-based approach of the EU AI Act. Foundation models or general-purpose AI should not be treated as high-risk AI in themselves, and requirements should not apply when developers exclude the use of their model or system in high-risk applications.

3. New Copyright Provisions Would Be Duplicative

Policymakers are considering the introduction of certain obligations related to the use of copyright-protected data for training AI systems, including the public disclosure of summaries of the data used for training. These proposals are concerning, and we urge against this approach. The European Union already has a robust legal framework on copyright: the 2019 Copyright Directive regulates text and data mining and gives copyright holders the option to opt out of it. There are also long-established IP enforcement mechanisms through which a court order can be obtained to compel alleged infringers to disclose relevant information. Introducing new copyright-related requirements in the AI Act would be unnecessary, especially without an impact assessment and without first evaluating the adequacy of the existing legal framework.

4. A Full Ban of Biometric Categorization Would Have Unintended Consequences

The European Parliament proposes a full ban on biometric categorization based on sensitive or protected characteristics. This means that any AI system that categorizes people based on characteristics such as gender, sex, or physical ability would be banned outright, regardless of how it is used. Such a ban would have unintended consequences. Biometric categorization is not intended to identify people, and it powers a range of beneficial use cases. For example:

  • Accessibility: Biometric categorization is used to describe a person’s surroundings in AI-powered accessibility apps, for example for people with visual impairments. In this case, categorization based on sensitive data is used to recognize a friend or describe the person the user is talking to.
  • Augmented Reality: Augmented Reality applications rely on categorization to understand and map a person’s surroundings. For example, virtual try-on applications allow a person to virtually try on a product (e.g., makeup) when shopping online. Categorization is used to understand a person’s features (including sensitive ones such as skin color) and adapt the virtual try-on experience to that person’s characteristics.
  • Bias and fairness testing: Companies use biometric categorization during testing to determine whether a system exhibits harmful bias with respect to protected characteristics and to mitigate it (a minimal illustration follows below).
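
To make concrete what such testing involves, here is a minimal, hypothetical sketch of disaggregated error-rate analysis; the group labels and results are invented for illustration and do not reflect any specific company's practice.

```python
from collections import defaultdict

# Each record: (protected_group_label, model_prediction_correct).
# In practice, group labels would come from consented test data handled under
# GDPR safeguards for special-category data; the values here are made up.
test_results = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)   # number of test samples per group
errors = defaultdict(int)   # number of model errors per group
for group, correct in test_results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

# Per-group error rates; a large gap between groups flags harmful bias to
# investigate and mitigate before deployment.
error_rates = {g: errors[g] / totals[g] for g in totals}
gap = max(error_rates.values()) - min(error_rates.values())
print(error_rates)                                  # e.g. group_a ~0.33, group_b ~0.67
print(f"Error-rate gap across groups: {gap:.2f}")
```

This kind of evaluation depends on the very categorization that a full ban would prohibit, which is why a blanket prohibition could undercut bias-mitigation work.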

Banning these use cases outright would be disproportionate and unnecessary, given that the GDPR already regulates the processing of certain sensitive data under strict conditions and with adequate safeguards in place. Policymakers should maintain coherence with the GDPR and, rather than imposing a full ban, could instead add biometric categorization systems that process sensitive attributes, as defined by the GDPR, to the list of high-risk AI systems.
