The European project for the regulation of artificial intelligence passed a crucial stage on Thursday 11 May by obtaining a first green light from MEPs who called for new bans and better consideration of the ChatGPT phenomenon.
The European Union wants to be the first in the world to adopt a comprehensive legal framework to limit the excesses of artificial intelligence (AI), while securing innovation. Brussels proposed an ambitious draft regulation two years ago, but its examination is dragging on, delayed in recent months by controversies over the dangers of generative AI capable of creating text or images.
“Europe wants an ethical, human-based approach”
EU member states only defined their position at the end of 2022. MEPs endorsed theirs in a vote in committee on Thursday morning in Strasbourg, which will have to be confirmed in plenary in June. A difficult negotiation will then begin between the different institutions.
“We received over 3,000 amendments. Just turn on the TV: every day we see the importance of this file for citizens,” said Dragos Tudorache, co-rapporteur of the text. “Europe wants an ethical, human-based approach,” summed up Brando Benifei, also co-rapporteur.
Of great technical complexity, artificial intelligence systems fascinate as much as they worry. While they can save lives by enabling a quantum leap in medical diagnosis, they are also exploited by authoritarian regimes to exercise mass surveillance of citizens.
The general public discovered their immense potential late last year with the release of ChatGPT, the text generator from California-based OpenAI, which can write original essays, poems or translations in seconds. But the spread on social networks of strikingly lifelike fake images, created with applications like Midjourney, raised alarm about the risks of manipulating public opinion.
The human must be in control
Scientific personalities have even called for a moratorium on the development of the most powerful systems, until they are better regulated by law.
Parliament’s position broadly confirms the Commission’s approach. The text draws on existing product-safety regulations and will impose checks resting primarily on the companies themselves.
The heart of the project consists of a list of rules imposed only on applications judged to be “high risk” by the companies themselves based on the legislator’s criteria. For the European executive, this would be all the systems used in sensitive areas such as critical infrastructure, education, human resources, law enforcement or migration management…
Among the obligations: ensuring human control over the machine, producing technical documentation, and putting in place a risk management system.
Their compliance will be monitored by designated supervisory authorities in each member country.
Data protection and rare prohibitions
MEPs want to limit the obligations only to products likely to threaten security, health or fundamental rights.
The European Parliament also intends to take better account of generative AI systems of the ChatGPT type by calling for a specific regime of obligations that essentially mirrors those provided for high-risk systems.
MEPs also want to force providers to put in place protections against illegal content and to reveal the data (scientific texts, music, photos, etc.) protected by copyright and used to develop their algorithms.
The Commission’s proposal, unveiled in April 2021, already provides a framework for AI systems that interact with humans. It would oblige them to inform users that they are in contact with a machine, and would require image-generating applications to specify that their output was created artificially.
Bans will be rare. They will concern applications contrary to European values, such as citizen-scoring systems or the mass surveillance used in China.
MEPs want to add a ban on emotion recognition systems and remove the derogations authorizing the remote biometric identification of people in public places by law enforcement.
They also intend to prohibit the mass harvesting of photos on the internet to train algorithms without the consent of the persons concerned.