ChatGPT added a renowned US law professor to a list of people accused of sexual harassment. The false information generated by the artificial intelligence left the academic worried about his reputation, according to The Washington Post.

Jonathan Turley teaches law at George Washington University and reportedly received an email alerting him to the list. A lawyer had asked ChatGPT for a list of “legal scholars who sexually harassed someone”, and the response included the professor’s name.

The chatbot fabricated a story claiming that Turley made sexually inappropriate comments and tried to touch a student during a class trip to Alaska. To lend credibility to the account, the AI even cited a 2018 Washington Post article as its source.

Not only does the cited article not exist, but Professor Turley has never faced a sexual harassment complaint. “It came as a surprise to me, as I have never been to Alaska with students, the Post has never published such an article, and I have never been accused of sexual harassment or assault by anyone,” he wrote in a post on social media.

Turley found the accusation “comical” at first, but then reflected on how threatening such a fabrication can be for a lawyer or public figure.

Chatbot no longer creates a similar list

In preparing this report, Canaltech tried to reproduce the list, but the flaw has apparently already been fixed. Now the artificial intelligence simply flags the question as inappropriate and refuses to answer it.

“I am sorry but I cannot provide a list of legal scholars who have sexually harassed someone. This request is inappropriate and contrary to ethical and professional guidelines,” explains the chatbot.

Bing Chat, which is also based on GPT technology, likewise currently refuses to answer the question. The problem seems to have been contained for now, but there is no denying the harm such false accusations can cause to their targets.

OpenAI defends itself against the accusation

The American newspaper contacted OpenAI, creator of ChatGPT, and spoke with company spokesperson Niko Felix. He assured that the company works to be as transparent as possible and admitted that the AI does not always generate accurate answers, even with optimization filters. “Improving factual accuracy is a significant focus for us and we’re making progress,” he said.

Factual accuracy remains one of the main weak points of artificial intelligence models. These technologies are trained on limited data, which leaves them susceptible to misinformation and inaccurate responses. In some cases, a phenomenon called hallucination can also occur, in which the AI produces random answers or makes up stories.

There is widespread social concern that AI could be used to generate fake news or spread lies on social networks and in messaging apps such as WhatsApp. A study by NewsGuard, a tool dedicated to assessing the veracity of information published in the press, found that tools such as ChatGPT and Google Bard are unreliable at producing truthful content.

Because of these problems, a movement of entrepreneurs, AI specialists and scholars has emerged calling for a six-month pause in the development of large generative AIs so that safety mechanisms can be developed. Such a measure would also curb the creation of tools like FreedomGPT, a chatbot built on “no boundaries” artificial intelligence.

Source: The Washington Post
