The rapid development of artificial intelligence (AI) has given rise to calls for urgent regulation. This is what some countries are doing or planning to do.
Rapid advances in artificial intelligence (AI) such as Microsoft-backed OpenAI’s ChatGPT are complicating governments’ efforts to agree on laws governing the use of the technology.
Here are the latest steps national and international governing bodies are taking to regulate AI devices:
Australia
* Seeking input on regulations
A spokesperson for the Minister for Industry and Science said in April that the government was consulting with Australia’s main science advisory body and considering next steps.
China
* Implementing temporary rules
China has issued a set of temporary measures to manage the generative AI industry, effective Aug. 15, requiring service providers to submit security assessments and obtain approval before releasing AI products on a large scale.
Following government approval, four Chinese tech companies including Baidu Inc and SenseTime Group launched their AI chatbots to the public on August 31.
European Union
* Planning rules
EU lawmakers agreed to changes to the bloc’s draft AI act in June. Lawmakers now have to hammer out details with EU countries before the draft rules become law.
The biggest issue is expected to be facial recognition and biometric surveillance where some lawmakers want a complete ban while EU countries want exceptions for national security, defense and military purposes.
France
* Investigating potential violations
France’s privacy watchdog CNIL said in April it was investigating multiple complaints about ChatGPT after the chatbot was temporarily banned in Italy over suspected violations of privacy rules.
France’s National Assembly approved the use of AI video surveillance during the 2024 Paris Olympics in March, ignoring warnings from civil rights groups.
G7
* Seeking input on regulations
Group of Seven (G7) leaders meeting in Hiroshima, Japan in May acknowledged the need for governance of AI and immersive technologies, and agreed to have ministers discuss the technology as the “Hiroshima AI Process” and report results by the end of 2023.
G7 countries should adopt “risk-based” regulation on AI, G7 digital ministers said after a meeting in April.
Ireland
* Seeking input on regulations
Generative AI needs to be regulated, but governing bodies must work out how to do it properly before rushing into bans that “really isn’t going to stick”, Ireland’s data protection chief said in April.
Israel
* Seeking input on regulations
Ziv Katzir, director of national AI planning at the Israel Innovation Authority, said in June that Israel has been working on AI rules “for the last 18 months” to strike the right balance between innovation and the protection of human rights and civic safeguards.
Israel published a 115-page draft AI policy in October and is collecting public feedback before a final decision.
Italy
* Investigating potential violations
Italy’s data protection authority plans to review other artificial intelligence platforms and hire AI experts, a top official said in May.
ChatGPT became available to users in Italy again in April after being temporarily banned in March over concerns raised by the national data protection authority.
Japan
* Investigating potential violations
Japan hopes to introduce rules by the end of 2023 that are closer to the US approach than the stricter rules planned in the EU, an official close to the discussions said in July, as it seeks to use the technology to boost economic growth and make the country a leader in advanced chips.
The country’s privacy watchdog said it had warned OpenAI in June not to collect sensitive data without people’s permission and to minimize the amount of sensitive data it collects.
Spain
* Investigating potential violations
Spain’s data protection agency said in April it was opening a preliminary investigation into possible data breaches by ChatGPT. It has also asked the EU’s privacy watchdog to evaluate privacy concerns around ChatGPT.
UK
* Planning rules
The Financial Conduct Authority, one of several state regulators tasked with drafting new guidelines covering AI, is consulting with the Alan Turing Institute and other legal and academic institutions to improve its understanding of the technology, a spokesperson told Reuters.
Britain’s competition regulator said in May it would launch an investigation into the impact of AI on consumers, businesses and the economy and whether new controls are needed.
United Nations
* Planning rules
The UN Security Council held its first formal discussion on AI in New York in July. UN Secretary-General Antonio Guterres said the council addressed both military and non-military applications of AI, which “could have very serious consequences for global peace and security.”
In June Guterres supported a proposal by some AI officials to create an AI watchdog such as the International Atomic Energy Agency, but said “only member states can create one, not the UN Secretariat”.
The UN Secretary-General has also announced plans to begin work by the end of the year on a high-level AI advisory body to regularly review AI governance arrangements and offer recommendations.
United States
* Seeking input on regulations
Washington D.C. District Judge Beryl Howell ruled on August 21 that works of art created by AI without any human input cannot be copyrighted under US law, confirming the Copyright Office’s rejection of an application filed by computer scientist Stephen Thaler on behalf of his DABUS system.
The US Federal Trade Commission (FTC) launched a broad investigation into OpenAI in July over claims that it violated consumer protection laws by putting personal reputations and data at risk.
Generative AI raises competition concerns and is a focus area of the FTC’s Bureau of Competition as well as its Office of Technology, the agency said in a blog post in June.
Senator Michael Bennet wrote to major tech companies in June urging them to label AI-generated content and limit the spread of content that misleads users. He introduced a bill in April to create a task force to look at US policies on AI.