Five Recommendations for a Forward-Looking and Innovation-Friendly AI Act

Following the much-anticipated publication of the Artificial Intelligence (AI) Act proposal by the European Commission earlier this year, EU policymakers in the Council and the European Parliament are beginning their work on the text. While the Council is already examining the key substantive issues of the proposal, the European Parliament is expected to intensify its legislative work in the coming weeks, following the recent agreement establishing shared competence between the Internal Market and Legal Affairs committees. Going forward, legislators will have the crucial task of hammering out a law that encourages technological innovation while addressing concerns linked to specific uses of AI.

As the premier global advocate for policies that advance competition and innovation, ITI supports many aspects of the Commission's proposal. We have contributed to the debate on the AI Act with detailed comments on the European Commission's proposal, aiming to ensure that the new EU AI law becomes a future-proof, innovation-enabling tool that increases trust in this key technology. To help guide policymakers as they move forward with their work, we have developed five key recommendations:

1. Craft a Precise Definition of Artificial Intelligence

Given the lack of a universally accepted definition of artificial intelligence, policymakers will have the key task of crafting a definition of AI that is precise and unequivocal. In the current proposal's broad definition, the references to logic- and knowledge-based approaches and to statistical approaches could unintentionally bring into the regulation's scope many computer-based systems that are not generally considered AI under classic definitions. Policymakers should therefore look into narrowing this definition and excluding traditional software and control systems from the scope of the regulation. Doing so would improve legal certainty and facilitate compliance with the proposal's requirements.

2. Limit the Scope of High-Risk AI Applications

A risk-based approach is key to ensuring the AI Act's success as an enabler of innovation. To that end, the AI Act should aim to capture the minority of AI applications that concretely present a high risk, without stifling a rapidly emerging technology that offers enormous potential.

The proposal positively suggests that high-risk AI should be defined through narrow, scientific criteria based on the likelihood and severity of the potential harm, as well as the plurality of potentially affected individuals. However, the current list of high-risk AI applications seems to encompass uses of AI that do not bear such a high level of risk, such as back-office uses in the management of critical infrastructure or AI used to assess candidates' performance in recruitment tests. In addition, for some uses in access to training, education, or creditworthiness evaluation, it is unclear how the criteria of severity of harm and plurality of individuals involved would apply. These are all undoubtedly significant uses, but they do not seem to pass the high-risk bar under these scientific criteria. EU lawmakers should therefore consider narrowing down and clarifying the list of high-risk applications, ensuring that the number of targeted AI uses reflects their exceptional nature.

3. Build in Goal-Oriented Requirements to Ensure Easier Implementation

The AI Act proposes extensive and prescriptive requirements for data governance, human oversight, and transparency. This approach does not sufficiently take into account the wide diversity of applications and business models covered, spanning from industrial machinery to financial services software, making it difficult, if not impossible, for developers, and particularly smaller developers, to implement the obligations.

The notion of "error-free" datasets is a good example of a prescriptive requirement clashing with reality: not only is it impossible to ensure "error-free" data, but in some cases errors may help AI models better perform their tasks. Similarly, prescriptive transparency, record-keeping, and human oversight requirements may be difficult for the variety of businesses in scope to implement in the same way.

It would be more practical for these requirements to focus on goals rather than prescribe a process, i.e., to establish obligations to reach certain outcomes without prescribing how a given goal should be achieved. This would ensure that providers, users, and all other relevant actors can apply the most meaningful and appropriate processes to comply with the AI Act, without weakening the goals of the regulation.

4. Encourage Reliance on International Standards

Global standards are fundamental for industry to address key governance aspects of the technology related to privacy, cybersecurity, and other areas, while ensuring the necessary interoperability of an inherently global technology. To that end, the AI Act should, to the greatest extent possible, ensure that requirements are grounded in truly international standards, since developing AI to such standards would then ensure conformance with regulatory requirements.

While work on AI standardisation is taking place in the European Standards Organisations, it is also under way in international bodies located both in and outside of Europe. We therefore encourage the EU to use the AI Act as an opportunity to revisit elements of its existing policies that have been interpreted as mandating reliance on harmonised European standards, rather than on a broader range of international standards, as the primary means of demonstrating compliance with emerging regulatory requirements.

Similarly, provisions in the AI Act allowing the Commission to draw up technical specifications where it determines that relevant standards are not available could cause global regulatory fragmentation, limit the tools available to EU regulators to assess compliance, and risk making the EU an 'island' in the global AI market.

With the EU acting as a first mover and leader in AI governance, legislators should adopt a fully international approach to standardisation, enabling other jurisdictions to follow suit in a manner that does not detract from innovation or lead to unnecessary divergences. To achieve this, coordination with like-minded partners will be key, for instance in the context of the new EU-US Trade and Technology Council.

5. Ensure Wider Acceptance of Testing Performed in Third Countries

Conformity assessment for AI is a new field with neither a commonly understood practice nor an established infrastructure. This raises practical and logistical concerns about how testing bodies would carry out the conformity assessments called for in the AI Act, especially if the list of AI uses subject to third-party conformity assessment were broadened. These problems could create backlogs and reduce the availability of AI-enabled products and services on the EU market.

While the AI Act proposal does reference acceptance of tests performed in third countries in the presence of an unspecified "agreement," it is important that such a mechanism be clarified and broadened. Restricting conformity assessment to bodies based in the EU could become a serious obstacle to a smooth rollout of the technology and might inspire comparable localisation practices in third countries.

EU policymakers should therefore look to build into the AI Act innovative paths for accepting test results produced by testing bodies based outside the EU. This will help facilitate regulatory compatibility without detracting from the regulatory oversight of European authorities.
