European Union policymakers agreed on Friday to a sweeping new law to regulate artificial intelligence, one of the world’s first comprehensive attempts to limit the use of a rapidly evolving technology that has broad societal and economic implications.
The law, called the AI Act, sets a new global benchmark for countries seeking to harness the potential benefits of the technology while trying to protect against its possible risks, such as automating jobs, spreading false information online and endangering national security. The law still needs to go through a few final steps before approval, but the political agreement means its broad outlines have been set.
European policymakers have focused on the riskiest uses of AI by businesses and governments, including those for law enforcement and the operation of crucial services like water and energy. Makers of the largest general-purpose AI systems, like those powering the ChatGPT chatbot, would face new transparency requirements. Chatbots and software that creates manipulated images such as “deepfakes” would have to make clear that what people are seeing was generated by AI, according to EU officials and earlier drafts of the law.
The use of facial recognition software by police and governments would be restricted outside of certain safety and national security exemptions. Companies that break the regulations could face fines of up to 7% of their global sales.
“Europe has positioned itself as a pioneer, understanding the importance of its role as a global standard-setter,” Thierry Breton, the European commissioner who helped negotiate the agreement, said in a statement.
Yet even as the law was hailed as a regulatory breakthrough, questions remained about its effectiveness. Many aspects of the policy are not expected to take effect for another 12 to 24 months, a considerable stretch of time in AI development. And up until the final minutes of the negotiations, policymakers and countries were arguing over the law’s language and how to balance fostering innovation with the need to guard against possible harm.
The agreement reached in Brussels took three days of negotiations, including an initial 22-hour session that began Wednesday afternoon and stretched into Thursday. The final agreement was not immediately made public because discussions were expected to continue behind the scenes to settle technical details, which could delay final adoption. Votes must still be held in the Parliament and the European Council, which includes representatives from the Union’s 27 countries.
Regulation of AI has taken on urgency following last year’s release of ChatGPT, which became a global sensation by demonstrating AI’s advanced capabilities. In the United States, the Biden administration recently issued an executive order focused in part on the effects of AI on national security. Britain, Japan and other countries have taken a more hands-off approach, while China has imposed some restrictions on the use of data and recommendation algorithms.
At stake are billions of dollars in estimated value, as AI is expected to reshape the global economy. “Technological domination precedes economic domination and political domination,” Jean-Noël Barrot, France’s digital minister, said this week.
Europe has been one of the most advanced regions in AI regulation, having started work on what would become the AI Act in 2018. In recent years, EU leaders have attempted to bring a new level of oversight to the technology, akin to regulation of the health care or banking sectors. The bloc has already enacted far-reaching laws related to data privacy, competition and content moderation.
An initial version of the AI law was released in 2021. But policymakers found themselves rewriting it as technological advances emerged. That initial draft made no mention of general-purpose AI models like those that power ChatGPT.
Policymakers agreed on what they called a “risk-based approach” to regulating AI, in which a defined set of applications is subject to the most oversight and restrictions. Companies that make AI tools that could cause the most harm to individuals and society, for example in hiring and education, would need to provide regulators with risk assessments, a breakdown of the data used to train the systems and assurances that the software does not cause harm, such as by perpetuating racial biases. Human oversight would also be required in creating and deploying the systems.
Certain practices, such as indiscriminately scraping images from the internet to create a facial recognition database, would be banned outright.
The debate within the European Union was contentious, a sign of how AI has confounded lawmakers. EU officials were divided over how deeply to regulate the newer AI systems, for fear of handicapping European start-ups trying to catch up to American companies like Google and OpenAI.
The law added requirements for makers of the largest AI models to disclose information about how their systems work and to assess “systemic risk,” Mr. Breton said.
The new regulations will be closely monitored globally. They will not only affect large AI developers like Google, Meta, Microsoft and OpenAI, but also other companies expected to use the technology in areas such as education, healthcare and banking. Governments are also turning more to AI in criminal justice and the allocation of public benefits.
How the law will be enforced remains unclear. The AI Act will involve regulators across 27 countries and require hiring new experts at a time when government budgets are tight. Legal challenges are likely as companies test the new rules in court. Previous EU legislation, including the landmark digital privacy law known as the General Data Protection Regulation, has been criticized for its uneven enforcement.
“The EU regulatory process is being called into question,” said Kris Shrishak, a senior fellow at the Irish Council for Civil Liberties, which has advised EU lawmakers on the AI law. “Without strict enforcement, this agreement will be meaningless.”