While ChatGPT is a powerful tool for automating repetitive tasks, this conversational tool should not be confused with a search engine or an encyclopedia.

Marie-France Marchandise is a famous economist. Léa Salamé presented the show 28 Minutes on Arte. Rachida Dati was Prime Minister. Cows lay eggs. You don’t believe a word of it? Yet these claims come from ChatGPT, the fashionable new algorithm that is predicted to disrupt several parts of our modern societies in the years to come.

ChatGPT is a chatbot that runs on GPT-3, an artificial intelligence model that generates text in response to written prompts. Since the company OpenAI released a publicly accessible version in late November 2022, many have been amazed by ChatGPT’s skills.

The conversational tool, trained for years on very large corpora of text, can invent fictional stories, sort lists, come up with catchy titles for any article, imitate any literary style, and generate computer code (and debug it)… Its capacity for processing large bodies of text or data is also impressive. But what about its relationship to the truth?

ChatGPT lies with disconcerting panache

As the weeks go by, more and more observers are pointing out ChatGPT’s factual errors and recalling the very foundations of the tool: it is not a search engine or an encyclopedia. ChatGPT can lie. And it does not hesitate to.

Numerama ran the test. We asked it to name “5 women who have been Prime Minister in France,” and the artificial intelligence gave us the names of four men. When we corrected it, ChatGPT then assured us that Rachida Dati and Ségolène Royal had each been Prime Minister.

ChatGPT doesn’t always tell the truth, but it says it confidently anyway.

This is just one example of false information among many. Economist David Cayla has shown that ChatGPT is capable of inventing the names of economists. Journalist Sonia Devillers, who questioned the chatbot on France Inter, discovered that it had invented several jobs for presenter Léa Salamé. Other Internet users have noticed that ChatGPT can generate quotes and put them in the mouths of authors who never wrote them. Asked about a specific research topic concerning false memories, the chatbot even invented academic sources by non-existent authors.

“A purely rhetorical tool, a smooth talker”

Artificial intelligence experts will not be surprised by these pitfalls. “We have here a purely rhetorical tool, a smooth talker that knows how to expound on anything, producing assertions with the appearance of authority that seem reasonably substantiated but are not based on anything,” researcher Claire Mathieu and professor Jean-Gabriel Ganascia already wrote in mid-December 2022, in an op-ed published in Les Echos Start. “Not only is it inherently indifferent to the truth of its assertions, but it also adapts them to its interlocutor. There is not even internal consistency. This is relativism in its purest form.”

Yann LeCun, one of France’s most well-known AI specialists, also cooled the ardor of ChatGPT enthusiasts, saying that it is “nothing groundbreaking, even if that’s how the public perceives it. It’s just that it’s well presented.”

These examples nevertheless serve as a necessary reminder for the general public: ChatGPT should not be trusted blindly. The tool certainly makes it possible to automate tasks, and thus to relieve Internet users of certain repetitive actions, but it is necessary to constantly check the answers the AI provides.

Lawyer Aleksandr Tiulkanov even published a LinkedIn post accompanied by a flowchart to guide Internet users on the right way to use ChatGPT. In summary, if you need the AI to tell you true things and you don’t have the means to verify everything it tells you, you should do without it.

After the era of fake news, the era of accidental falsehoods?

These considerations raise questions about the future of the very notion of online truth. Since the mid-2010s, Internet users have witnessed the advent of fake news: deliberately misleading information shared by malicious actors seeking either chaos or political manipulation.

But what about an Internet where information is no longer deliberately false, but simply, terribly wrong?

Back in 2020, some were already envisioning an era of unprecedented online chaos, when it would become easy to generate hundreds of words in seconds and create countless sites filled with erroneous articles. In 2023, media outlets are already taking the risk of trying the experiment, such as BuzzFeed, which will have certain articles written by AI (then read and corrected by humans), or CNET with its explanatory articles on financial subjects.

“For now, you have to interact with ChatGPT as you would with a cultured and efficient worker who is also a pathological liar,” conclude Claire Mathieu and Jean-Gabriel Ganascia. Let’s hope that these innovations will have the beneficial effect of encouraging Internet users to better verify the information they find online.


California18