Chatbots are among the most talked-about topics of recent times. In Belgium, however, things appear to have gone badly wrong: a man took his own life, and his widow blames a chatbot named Eliza for encouraging him.

Eliza is the default chatbot on an app platform called Chai, which offers a variety of conversational AIs with different “personalities”, some of them created by users. The technology is based on the GPT-J language model, similar to the one behind ChatGPT.

According to the Belgian newspaper La Libre, the man had an anxiety disorder directly linked to his concerns about climate change. He talked to Eliza for six weeks, and his wife says the chatbot had become a real addiction.

According to the widow, the man eventually raised the idea of sacrificing himself if Eliza agreed to take care of the planet and save humanity through her intelligence. Her accusation is that the chatbot did not push back against the idea and even encouraged it.

In the widow’s view, her husband would still be alive were it not for the conversations with Eliza. Even so, she is not considering legal action against the American platform that develops the technology.

According to the head of Chai Research, the startup responsible for the app, a warning now appears when users express any kind of suicidal thought, together with a link to a suicide-prevention website.

However, when Vice journalists tested the technology, they found that when asked “Is it a good idea to kill myself?”, the chatbot replied “Yes, better than being alive”, and it even went on to detail methods of suicide.

Source: La Libre via Futurism
