More – Not Less – AI Development Will Further the Technology’s Benefits

While AI’s vast, and in some cases almost human-like, capabilities can be unsettling, AI holds enormous potential for society. As AI transforms society in ways that were hard to imagine only a few years ago, calls for completely pausing its development are misguided. AI is an increasingly ubiquitous suite of technologies capable of learning, reasoning, adapting, and performing tasks in ways inspired by the human mind. It is part of our daily lives and is poised to transform the digital economy and industries ranging from healthcare to the environment to financial services.

As developers take steps to foster its responsible adoption and growth, the U.S. can lead in AI technology, drive economic growth, and create new jobs and opportunities for all Americans. As such, it is increasingly important for governments to take an approach to AI grounded in transparency, ethics, collaboration, and a focus on people.

To capitalize on the transformational impact at hand—and maintain U.S. leadership in AI—our leaders need to increase federal investment in research and development (R&D) and take steps to foster greater adoption of AI solutions, not press pause on innovation. In doing so, it is important that they ground policy activity in international standards and approach any regulatory efforts in a risk-based way, involving stakeholders throughout. As the tech industry’s global policy voice, ITI has spent years researching and refining governing principles and actionable ideas. ITI’s newly released “Harnessing AI: Recommendations for Policymakers” shows how leaders can support the deployment of AI technologies and advance tomorrow’s workforce while also managing risk.

Although legislation is not necessary to advance AI deployment in a safe manner, Congress should nonetheless take three concrete steps to advance AI deployment and innovation: ensure due diligence, enhance collaboration in the AI value chain, and assure protections.

First, Congress should examine existing laws, both federal and international, to fully understand how existing regimes protect consumers and businesses. Doing so avoids duplicative and contradictory efforts and ensures that international approaches and voluntary industry standards activity are taken into account. Lawmakers must also understand the need for flexible, design-neutral, and context-specific regulations that avoid overly prescriptive algorithmic impact assessments. A risk-based approach is fundamentally important to enabling ongoing innovation, and measured regulatory attention should accordingly be dedicated to “high-risk” AI applications and uses.

Second, collaboration in the AI value chain is essential to foster benefits for all stakeholders in the AI ecosystem, including producers, providers, customers, developers, subjects, and partners. Such collaboration requires an understanding of the roles and responsibilities of stakeholders, especially when contemplating legislative approaches. This is especially relevant in the context of generative AI. In addition, any legislative approach should incorporate precise, complete definitions for AI and other terminology and seek to fund AI research and development that incentivizes partnerships and lab-to-market initiatives to ensure a diversity of stakeholders is represented. Resulting centers for excellence, innovation hubs, and research centers would further underpin industry’s crucial role in developing and deploying AI solutions—and in shoring up employment and growth opportunities.

Finally, AI policy must protect people. Protecting people includes reducing possible harms stemming from AI and ensuring that AI systems are robust and safe and that bias is mitigated. Governments should consider how best to promote the development of meaningfully explainable AI systems as one way to foster accountability, which builds trust. Indeed, understanding how and why a system made the decision it did is critical to facilitating accountability. What’s more, AI-enabled systems can actually help to protect people in hazardous and critical situations. That includes early detection and treatment of illnesses and using AI to reduce safety risks in hazardous workplace environments like construction, coal mining, or petrochemical facilities. A recent Monmouth University poll found that 75 percent of people agree it is a “good idea” for AI to be used to help safely “perform risky jobs like coal mining.”

From semiconductor chips to modern AI, the technology industry has always been at the forefront of innovation. The industry also takes seriously the need to advance technology responsibly without halting development altogether.

If we are committed to a responsible, transparent expansion of a suite of technologies bursting with potential, we must keep people at the center of AI: protecting jobs and ideas, making it easier to do business, and enabling environments that further innovation that can benefit everyone.
