EU Lawmakers Struggle to Finalise Law to Regulate ChatGPT and Generative AI

As recently as February, generative AI did not feature prominently in EU lawmakers’ plans for regulating artificial intelligence (AI) technologies such as ChatGPT.

The bloc’s 108-page proposal for the AI Act, published two years earlier, included only one mention of the word “chatbot.” References to AI-generated content largely referred to deepfakes: images or audio designed to impersonate human beings.

By mid-April, however, members of the European Parliament (MEPs) were racing to update those rules to catch up with an explosion of interest in generative AI, which has provoked awe and anxiety since OpenAI unveiled ChatGPT six months ago.

That scramble culminated on Thursday with a new draft of the legislation which identified copyright protection as a core piece of the effort to keep AI in check.

Interviews with four lawmakers and two other sources close to discussions reveal for the first time how over just 11 days this small group of politicians hammered out what could become landmark legislation, reshaping the regulatory landscape for OpenAI and its competitors.

The draft bill is not final, and lawyers say it will likely take years to come into force.

The speed of their work, though, is also a rare example of consensus in Brussels, which is often criticised for the slow pace of decision-making.

Last-minute changes

Since launching in November, ChatGPT has become the fastest-growing app in history, and sparked a flurry of activity from Big Tech competitors and investment in generative AI startups like Anthropic and Midjourney.

The runaway popularity of such applications led EU industry chief Thierry Breton and others to call for regulation of ChatGPT-like services.

An organisation backed by Elon Musk, the billionaire CEO of Tesla and Twitter, took it up a notch by issuing a letter warning of existential risk from AI and calling for stricter regulations.

On April 17, the dozen MEPs involved in drafting the legislation signed an open letter agreeing with some parts of Musk’s letter and urging world leaders to hold a summit to find ways to control the development of advanced AI.

That same day, however, two of them — Dragos Tudorache and Brando Benifei — proposed changes that would force companies with generative AI systems to disclose any copyrighted material used to train their models, according to four sources present at the meetings, who requested anonymity due to the sensitivity of the discussions.

That tough new proposal received cross-party support, the sources said.

One proposal by conservative MEP Axel Voss — forcing companies to request permission from rights holders before using the data — was rejected as too restrictive and something that could hobble the emerging industry.  

After thrashing out the details over the next week, the EU outlined proposed laws that could force an uncomfortable level of transparency on a notoriously secretive industry.

“I must admit that I was positively surprised on how we converged rather easily on what should be in the text on these models,” Tudorache told Reuters on Friday.

“It shows there is a strong consensus, and a shared understanding on how to regulate at this point in time.”

The committee will vote on the deal on May 11, and if successful, it will advance to the next stage of negotiation, the trilogue, where EU member states will debate the contents with the European Commission and Parliament.

“We are waiting to see if the deal holds until then,” one source familiar with the matter said.

Big Brother vs the Terminator

Until recently, MEPs were still unconvinced that generative AI deserved any special consideration.

In February, Tudorache told Reuters that generative AI was “not going to be covered” in-depth. “That’s another discussion I don’t think we are going to deal with in this text,” he said.

Citing data security risks rather than warnings of human-like intelligence, he said: “I am more afraid of Big Brother than I am of the Terminator.”

But Tudorache and his colleagues now agree on the need for laws specifically targeting the use of generative AI.

Under new proposals targeting “foundation models,” companies like OpenAI, which is backed by Microsoft, would have to disclose any copyrighted material — books, photographs, videos and more — used to train their systems.

Claims of copyright infringement have rankled AI firms in recent months, with Getty Images suing Stability AI for using copyrighted photos to train its Stable Diffusion systems. OpenAI has also faced criticism for refusing to share details of the dataset used to train its software.

“There have been calls from outside and inside the Parliament for a ban or classifying ChatGPT as high-risk,” said MEP Svenja Hahn. “The final compromise is innovation-friendly as it does not classify these models as ‘high risk,’ but sets requirements for transparency and quality.”

© Thomson Reuters 2023

