Furthering AI Leadership Through the U.S. AI Safety Institute

Artificial Intelligence has already demonstrated its transformational impact on society. From driving breakthroughs in health care to safeguarding financial operations to addressing the climate crisis, AI is a key part of the solution to many of society’s greatest challenges. Advancing this technology will continue to create economic growth and opportunity for people across the globe. Responsible and safe deployment of AI is essential to this uptake and growth. Steadfast commitment and coordination among industry experts, government, and relevant stakeholders are crucial to advancing AI in a way that harnesses its power while also addressing risks and ensuring its safety.

The Biden Administration’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, released in October 2023, addresses this important need. ITI was encouraged to see the establishment of the U.S. AI Safety Institute following the Executive Order’s release. The Institute, which will be led by the National Institute of Standards and Technology (NIST), is intended to conduct research and develop evaluations and standards on AI safety and trust, advancing key efforts to establish a trustworthy foundation for AI as envisioned in the Executive Order.

Recognizing that trust in AI is a shared responsibility among developers, deployers, industry, civil society, and policymakers, the AI Safety Institute will leverage public and private expertise to develop a roadmap for some of the most advanced types of AI technology. Coordinating with technical experts, academia, and industry will help ensure that guidelines, tools, and standards effectively protect consumers, mitigate potential harms, and provide consistency and compatibility.

Further, as countries around the world seek to develop governance approaches for AI, collaboration among governments is key to ensuring those approaches are interoperable to the extent possible. Here, the AI Safety Institute and the consortium set up to facilitate this collaboration can play an instrumental role in aligning evaluations and guidance with global partners and allies. Harmonization across the U.S. government is just as critical. To that end, the AI Safety Institute can work with other federal agencies to achieve and maintain a unified and consistent approach to addressing AI’s benefits, risks, and impacts.

Given its expertise and long-standing leadership on AI, including through the development of the AI Risk Management Framework, NIST is well positioned to lead this effort to advance AI safety in collaboration with industry. Funding research-focused institutes that promote safe adoption can drive the strategic development and deployment of AI technologies and enable the U.S. to partner more effectively with global allies like the UK. Earlier this year, the UK convened governments, tech industry representatives, and other stakeholders to advance outcomes on AI safety and unveiled its own UK AI Safety Institute, and just last week, the EU advanced its own AI Act.

Most importantly, for the AI Safety Institute to succeed, it needs appropriate resourcing. As the U.S. Congress considers policies related to AI, funding this initiative should be a priority. This is a critical step for the U.S. to remain globally competitive and preserve its position as a leader in innovation. We urge Congress to continue to work with industry to address AI safety and to support existing and future federal investments in science-driven research.
