The Biden administration’s sweeping new executive order on artificial intelligence proposes a significant expansion of government oversight of AI development. Although some have applauded the effort, the growing wave of onerous AI regulatory proposals threatens to distort the AI market and undermine U.S. global competitiveness.
Pro-regulation advocacy groups have urged federal and state policymakers to enact precautionary mandates on AI and machine learning technologies, citing a broad array of concerns spanning privacy, security, and discrimination. While some of these concerns, often grouped under the vague banner of “algorithmic fairness,” deserve serious consideration, government intervention in a rapidly growing and still-emerging industry will benefit no one.
In less than a decade, AI has grown into a $100 billion industry, and the generative AI market is projected to reach $1.3 trillion by 2032. AI’s wide-ranging applications are apparent in the many fields it has already begun to transform. In medicine, AI has the potential to save lives, from substantially expanding access to care to accelerating breakthroughs in therapy.
However, alarmed appeals from advocates of AI regulation have pressured legislators in both parties to introduce preemptive policies that threaten to impede that progress. The Biden administration’s executive order comes on the heels of numerous proposals and policy statements at every level of government, including the administration’s AI Bill of Rights unveiled last year.
The Federal Trade Commission has announced its intention to regulate AI in response to concerns about discrimination and bias. In Congress, legislators face pressure to introduce bills establishing a comprehensive top-down AI regulatory framework.
At the state level, the volume of proposed legislation related to algorithms and AI is growing by the day. Colorado, Missouri, Maryland, and Rhode Island are moving to establish committees to study AI policy concerns.
Washington, DC has proposed a regulation that would hold developers accountable for biases in decision-making algorithms, while Washington state has proposed an outright ban on the use of algorithmic systems in government.
The collective result is a complex and often conflicting patchwork of rules that does little more than penalize small developers with fewer resources. Consider the EU’s AI Act, some of whose provisions the EU itself estimates could entail upfront costs of “€193,000-330,000 and annual maintenance costs of €71,400.”
But the problems with precautionary mandates like the EU model extend beyond fines and other financial burdens. Sweeping requirements aimed at undefined concerns and principles create a compliance thicket that stands in direct opposition to the unobstructed innovation that has propelled 21st-century technological progress. A decentralized approach, such as internal algorithm audits and impact assessments conducted by private firms or other self-regulatory bodies, can address concerns about bias and security more effectively.
Not all apprehensions about the perils of AI are without merit. In the hands of malicious actors, AI can be used to cause harm, and appropriate security measures should be considered depending on the application.
The risk profile of AI used in autonomous vehicles is vastly different from that of AI used to generate social media posts; no single regulation fits every case. Companies that develop AI and machine learning systems are often best placed to identify the risks specific to their applications and to take suitable precautions against misuse of their technologies. Misguided efforts to impose overly restrictive mandates threaten to tie up a rapidly innovating industry with myriad societal benefits in regulatory limbo.
The excessive regulation of AI in the name of “algorithmic fairness,” however well-intentioned, will obstruct the development of technologies with demonstrated potential to save lives and will undermine the U.S. economy, to the sole advantage of international competitors.
Policymakers should exercise caution before imposing burdensome regulations at the expense of American technological innovation. A hands-off approach is essential given AI’s immense promise.
This article is published by Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax. It does not reflect the opinions of its owners.
Author Information
Adam Thierer is a senior fellow on the technology and innovation team at the R Street Institute.
Neil Chilson is a senior research fellow at the Center for Growth and Opportunity at Utah State University and former chief technologist at the FTC.
Source: news.bloomberglaw.com