Five Recommendations for the EU AI Act Trilogue Negotiations

The EU AI Act negotiations are back in full swing as EU legislators prepare for the next political trilogue on 2 and 3 October. The European Commission, Council of the EU, and European Parliament are expected to discuss key provisions concerning the designation of the high-risk AI use cases that will be subject to the Act, as well as the proposed requirements for foundation models and general-purpose AI systems.

As first movers in AI governance globally, EU policymakers have a crucial responsibility to deliver a future-proof legislative framework that increases trust in this key technology. The final AI Act should be a clear legal framework that helps companies in Europe harness the potential of AI and bring transformative products to market. It should not burden innovators with disproportionate requirements that are not motivated by clear and specific risks.

The upcoming October trilogue is an opportunity to deliver additional clarifications and craft a balanced, proportionate and pro-innovation regulatory framework. Here are five ways policymakers can achieve that goal:

  1. Establish Clear and Practical Rules for the Classification of High-Risk AI Systems

The original Commission proposal identified a broad list of high-risk AI use cases, which made the scope of the regulation vague. For instance, the inclusion of AI used in recruitment potentially captured all uses of AI in this field: riskier uses such as evaluating candidates, but also lower-risk uses like placing job ads of any type. This would be disproportionate. We welcomed both the Council's and the Parliament's updated proposals to better define and tailor the classification of high-risk AI systems.

Under both proposals, companies would assess whether an AI use case mentioned on the list, such as certain uses in employment or the management of critical infrastructure, meets specific risk criteria, and would comply with the EU AI Act's high-risk requirements only if that is the case. This is the right approach, but clarification is still required. For example, the Parliament's proposal to notify public authorities of the result of a company's self-assessment is disproportionate: it would overburden companies and authorities alike and risk fragmenting the single market due to diverging interpretations by national authorities.

Companies should be able to self-assess against clear criteria whether their system falls into the high-risk category, and place their product directly on the market if it is not high risk. The assessment could consider risks to fundamental rights and safety, the degree of human supervision, or the type of output delivered by the AI system. If necessary, relevant documentation about this assessment could be provided to the authorities. Regulators should also offer guidance to companies on how to properly perform this assessment.

  2. Craft a Targeted List of High-Risk AI Systems and Avoid Overlap with Other Regulations

In addition to establishing clear and practical rules for the classification of high-risk AI systems, the list of uses itself needs to be as precise and targeted as possible. A broad list of high-risk use cases would create legal uncertainty for companies and increase regulatory risks. For example, the Parliament's text would include AI systems used to make inferences based on biometric data, which could bring into scope a variety of ancillary uses of AI, such as AR tools in photo applications or filters used to improve quality in video calls.

Legislators should also avoid duplication with existing or upcoming legislation such as the DSA or the Regulation on Political Advertising. Issues related to AI used in elections or to recommender systems used by social media platforms are already tackled in these frameworks, which would be undermined by overlapping requirements under the EU AI Act.

  3. Ensure Relevant Information is Shared Along the AI Value Chain

General Purpose AI (GPAI) systems and foundation models do not have a specific intended purpose and can be adapted to a variety of low- or high-risk use cases: they can power systems for image captioning, object recognition, or question answering.

It is therefore key to ensure that whoever uses GPAI or foundation models to build high-risk AI systems has the right information to comply with the AI Act. We support a duty for providers of GPAI and foundation models to share this information, as proposed by both the Council and the Parliament. This will support holistic risk management across the value chain by appropriately considering the capabilities and responsibilities of different actors.

  4. Ensure Requirements for Developers of Foundation Models are Realistic

Foundation models can be used for a myriad of downstream tasks, from chatbots to healthcare and education. While developers of foundation models can take certain measures prior to release to increase safety and security and to mitigate risks stemming from their application, foreseeing the risks arising from every use is not realistic at the development stage. Effective risk management is highly dependent on the context of deployment or use. Requirements for foundation model developers should thus be outcome-oriented, proportionate, practicable, and limited to what developers can reasonably address during design and development.

Several requirements proposed by the Parliament are not realistic and take an overly prescriptive approach. These include the requirement to identify ‘reasonably foreseeable risks’ or to reach specific performance levels. A prescriptive approach here would undermine ongoing research into controls and guardrails for foundation models and risk stifling innovation.

  5. The Risk-Based Approach Should Also Apply to Foundation Models

The AI Act should not regulate an entire technology, but rather its use, in line with the risk-based approach. Foundation models that are not intended for use in high-risk applications - as long as that is explicitly stated in the terms of service, documentation or instructions for use - should thus not be subject to specific requirements.

This clarification would also better recognize the diversity among foundation models, for example those that are not made available to the public or only released for business-to-business enterprise uses. This is rightly recognized by the Council in its GPAI provisions, and we strongly urge legislators to support it in the final text.

