Blake Lemoine, the ex-Google engineer who claimed an AI had a soul, has broken his silence and given an interview to the American outlet Futurism. In it, he shared his perspective on the situation now that conversational tools like ChatGPT, Microsoft Bing and Google Bard have gone mainstream. Before being fired for his remarks, he had worked for several months on LaMDA, the code name of Google's new conversational tool based on artificial intelligence.

Alongside what he believed was a discovery of sentience in the machine's responses, his job consisted of cataloguing LaMDA's outputs that were biased or violated Google's policies, particularly on moral and ethical questions. A good part of his findings were presented in the pages of the Washington Post before being widely criticized by the scientific community, which did not share his alarmist claims about the machine's sentience.

Despite the controversy, his voice still carries weight today: the engineer spent years at Google and saw, behind the scenes, how the development of artificial intelligence evolved. In his new interview, he explained in particular that what the general public is discovering today is only a version of conversational tools developed years ago, and that what comes next is much more advanced.

“Nothing came out in the last 12 months”

Blake Lemoine said that "nothing has come out in the past 12 months that I haven't seen internally at Google". By this he means that over the past year, every announcement, tool and technology launched to the general public already existed internally at Google, and at the same level of depth and precision. "The only thing that has changed in two years is public adoption," he explained. "There is a real latency between what the public discovers and the technology, the strategy, the preparation for the release."

The other big topic of conversation was the race between Google and OpenAI to release a conversational tool first. Google lost that race, and the world discovered consumer AI through ChatGPT — but according to Blake Lemoine, that is partly because of him.

"Bard could have been released in the fall of 2022. It would therefore have come out at the same time as ChatGPT, or even a little earlier. Then, partly because of the security concerns I raised, they pushed it back," he explained, after noting that "Bard, which was not called Bard at the time, was already well advanced in mid-2021," and that Google was weighing its launch options.

In his view, then, OpenAI did not change Google's trajectory: "It's a media story," he said. We remember, however, that at launch Bard was far from offering its beta testers a version as complete as OpenAI's, and that Google had even declared a "code red" when ChatGPT was released.

The shadow of a “much more advanced” technology

Now that he has left Google, the ex-engineer is more open to talking about what he saw behind the scenes of artificial intelligence. According to him, all the while Google, OpenAI and the other players in the field were launching their consumer AI-based chatbots, even more sophisticated versions were in development. "Google has much more advanced technology that they haven't made public yet," he said in the interview.

This will eventually be a problem. On the question of ethics, artificial intelligence will remain a safety issue for some users because people will lean on the machine to serve human needs, including psychological ones. "It is predictable that people will turn to these systems for help with various types of psychological stressors. Unfortunately, these systems are not designed to handle them well," he warned, before concluding: "It is predictable that someone kills themselves after a conversation with an AI."

California18
