American startup OpenAI’s chatbot is proving so convincing that it could easily be used to trick readers.

Acclaimed, then criticized… Since its release last December, the artificial intelligence text generator ChatGPT has provoked every kind of reaction, a measure of the scale of its impact.

The chatbot, developed by the American startup OpenAI, which is also behind DALL-E, has impressed so much with its accuracy that it has ended up worrying many observers. Starting with academics, who see it as an easy way to write essays, dissertations, or theses, at least in part.

But just as OpenAI works to correct its chatbot so that it does not reproduce society’s racist or sexist biases, the company also intends to fight potential abuses.

An imperceptible signal

“Basically, whenever GPT generates long text, we want there to be an imperceptible secret signal in its word choices, which you can use to later prove that, yes, it’s from GPT,” explained Scott Aaronson, a University of Texas at Austin researcher who recently joined the startup, in a recent talk.
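To make the idea concrete: one way to embed such a signal, along the lines Aaronson has described publicly, is to replace the model’s random word choice with a pseudorandom one keyed by a secret. The Python sketch below is a toy illustration of that general technique, not OpenAI’s implementation; the vocabulary, key, and function names are all invented for the example.

```python
import hashlib

# Toy vocabulary and secret key; a real system would use the model's full
# token vocabulary and a securely stored key.
VOCAB = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran"]
SECRET_KEY = b"demo-key"  # hypothetical key, for illustration only

def prf_scores(context):
    """One pseudorandom number in (0, 1) per vocabulary token, derived
    from the secret key and the recent context."""
    scores = []
    for token in VOCAB:
        digest = hashlib.sha256(
            SECRET_KEY + repr(context).encode() + token.encode()
        ).digest()
        scores.append((int.from_bytes(digest[:8], "big") + 1) / (2.0**64 + 2))
    return scores

def watermarked_sample(probs, context):
    """Pick the token i maximizing r_i ** (1 / p_i) (a 'Gumbel trick').
    Marginally this is still an exact sample from probs, so readers notice
    nothing unusual, but the choice is deterministic given the key and
    context; that hidden determinism is the signal."""
    r = prf_scores(context)
    return max(range(len(VOCAB)),
               key=lambda i: r[i] ** (1.0 / max(probs[i], 1e-9)))

# Example: the model's next-word probabilities, then a watermarked choice.
probs = [0.30, 0.20, 0.15, 0.10, 0.10, 0.07, 0.05, 0.03]
print(VOCAB[watermarked_sample(probs, context=("the",))])
```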

“It could be useful in preventing academic plagiarism, obviously, but also, for example, the mass generation of propaganda – you know, spamming every blog with comments supporting Russia’s invasion of Ukraine, without even needing a building full of trolls in Moscow. Or impersonating someone’s writing style in order to incriminate them,” the researcher said, listing possible abuses.

While the researcher does not explain concretely how a university, or even an individual, could verify the presence of this marker, he assures that the signal will be complex and effective enough to identify even short snippets of generated sentences inserted into a larger text. A way to correct one of ChatGPT’s weaknesses.
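As for how a key holder could check a suspect passage, the same hypothetical scheme suggests a simple test: recompute the pseudorandom scores and see whether the words actually used score suspiciously close to 1. The sketch below continues the toy example above (reusing prf_scores and VOCAB) and is, again, an illustration of the technique rather than OpenAI’s actual detector.

```python
import math

def watermark_score(tokens, context_len=1):
    """Average of -ln(1 - r) over the observed words. For ordinary text each
    term behaves like an exponential variable with mean 1, so the average
    hovers near 1; for watermarked text the chosen r values are biased
    toward 1 and the average is noticeably larger, even on short fragments
    if enough words survive."""
    total, count = 0.0, 0
    for pos in range(context_len, len(tokens)):
        if tokens[pos] not in VOCAB:
            continue  # skip words outside the toy vocabulary
        context = tuple(tokens[pos - context_len:pos])
        r = prf_scores(context)
        total += -math.log(1.0 - r[VOCAB.index(tokens[pos])])
        count += 1
    return total / max(count, 1)
```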

Thomas Le Roy, journalist, BFM Business
