From ChatGPT to Midjourney, “generative artificial intelligence” algorithms are advancing at breakneck speed. But creating an AI endowed with consciousness is still a long way off, if such a thing is even possible.

When we say “artificial intelligence”, what do you think of? If you follow industry news, perhaps of ChatGPT or Midjourney. But in the collective imagination, AI is more often associated with the humanoid beings of I, Robot, the computer of 2001: A Space Odyssey or the replicants of Blade Runner. Their common point: conscious AIs, capable of setting their own objectives.

This scenario has belonged to science fiction for decades. But today, the AI industry is booming. From ChatGPT to Midjourney to music and video creation AIs, “generative artificial intelligence” programs are seeing their capabilities explode.

To the point that many are wondering: could a “conscious” AI worthy of science fiction scenarios finally be possible? Is it a serious goal, a threat to be avoided at all costs, or an unrealistic fantasy?

“The holy grail of a certain part of AI research”

In the sector, this idea has a name: “artificial general intelligence” (or AGI). It is the AI of Terminator or Robocop: one capable of doing as well as or even much better than a human across a wide range of areas, and perhaps of having consciousness.

“It’s the holy grail of a certain part of AI research”, Thomas Wolf, co-founder of Hugging Face, a platform central to the recent explosion of generative AIs, explains to Tech&Co.

This vague goal, almost as old as the term “artificial intelligence” itself, has long been relegated to fantasy. But it is now openly claimed by some of the biggest companies in the industry, including OpenAI, the organization behind ChatGPT.

If created, AGI “could help us uplift humanity by increasing abundance, accelerating the global economy, and aiding new scientific discoveries,” writes Sam Altman, the boss of OpenAI. In particular, he believes AGI could help fight global warming or colonize space, in addition to raising deep philosophical questions about human nature.

Is this a realistic goal? A quick glance might suggest that we are getting closer. Generative AIs show a creativity that was until now thought to be reserved for humans. Videos of humanoid robots plugged into ChatGPT and voice assistants inevitably invite comparisons with I, Robot. And variants like Auto-GPT are touted as being able to figure out on their own the intermediate steps needed to reach an end goal, which for the time being is still set by a human.
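To make the principle concrete, here is a minimal sketch in Python of the kind of loop such tools are built around. It is an illustration only, not Auto-GPT’s actual code: the ask_model_for_next_step and execute functions are invented stand-ins for a language-model call and a tool call.

```python
# Minimal sketch of an Auto-GPT-style loop (illustration only, not Auto-GPT's real code).
# A real agent would call a language model where ask_model_for_next_step is stubbed below.

def ask_model_for_next_step(goal, done_steps):
    """Hypothetical stand-in for a language-model call proposing the next step."""
    canned_plan = ["search the web for sources", "summarize the findings", "write the report", "DONE"]
    return canned_plan[len(done_steps)]

def execute(step):
    """Hypothetical stand-in for a tool call (web search, file write, etc.)."""
    return f"result of '{step}'"

def run_agent(goal, max_steps=10):
    done_steps = []
    for _ in range(max_steps):          # a hard cap: the human still bounds the loop
        step = ask_model_for_next_step(goal, done_steps)
        if step == "DONE":              # the model judges the goal reached
            break
        execute(step)
        done_steps.append(step)
    return done_steps

print(run_agent("write a short report on generative AI"))
```

The key point of the sketch is the last argument a human still controls: the goal itself, and the hard cap on how long the loop may run.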

A conscious… and rebellious AI?

But such a revolution could come with many risks: massive replacement of jobs, concentration of power in the hands of its creator… And what would happen if a conscious AI decided to set its own goals? If we create an AI whose goal is to maximize the production of paperclips, it could very well conclude that the best way to do so is to plunder the planet’s entire natural resources and eliminate the humans who might unplug it.

This deliberately exaggerated example by transhumanist philosopher Nick Bostrom justifies research into “alignment”: how to make AIs act in accordance with societal values.

But some believe that the current development of AI is too unbridled for such safeguards to be put in place, and call for research to slow down, like the signatories of the call for a six-month moratorium (including Elon Musk and renowned researchers such as Yoshua Bengio). Others go even further, like blogger Eliezer Yudkowsky, who argued in Time that data centers flouting limits on AI development should be bombed.

“They won’t break their chains”

However, nothing says that science is heading in this direction. First of all, because behind the term “artificial general intelligence”, there is no clear definition accepted by all. “‘General’ intelligence does not exist!” Jean-Gabriel Ganascia, a researcher at the Paris VI computer science laboratory and specialist in artificial intelligence, reminds Tech&Co.

“Intelligence is the set of cognitive abilities. Yet machines have been surpassing us for a very long time on certain tasks, such as calculation”, the researcher recalls.

So what would an AGI add? Consciousness? We are far from it: current generative “artificial intelligences” do not have an ounce of consciousness or reflection. “ChatGPT only generates the most probable text in relation to the user’s request, based on the billions of texts used to train it”, Alexandre Lebrun, co-founder of several startups using artificial intelligence systems, explains to Tech&Co.
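What “generating the most probable text” means can be shown with a toy Python example. The next_word_probs table below is entirely invented for the illustration and has nothing to do with ChatGPT’s actual model, but the principle, picking a likely continuation of the text so far, is the same, just at a vastly larger scale.

```python
import random

# Toy illustration of next-token prediction: the mini "vocabulary" and probabilities
# below are made up for this example, not taken from any real model.
next_word_probs = {
    ("the", "cat"): {"sat": 0.6, "slept": 0.3, "flew": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
    ("on", "the"): {"mat": 0.7, "roof": 0.3},
}

def generate(prefix, length=4):
    words = list(prefix)
    for _ in range(length):
        context = tuple(words[-2:])            # condition on the last two words only
        dist = next_word_probs.get(context)
        if dist is None:                       # no known continuation: stop
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])  # sample a likely next word
    return " ".join(words)

print(generate(["the", "cat"]))                # e.g. "the cat sat on the mat"
```

There is no understanding or intention anywhere in this loop, only statistics over past text, which is Lebrun’s point.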

“ChatGPT does not ‘think’ like a human at all”, insists the CEO of Nabla, a startup using AI in the field of health.

As ChatGPT is trained on human texts, the texts it generates may give the impression of human reflection, but that is just an impression. And nothing says this barrier can be lifted in the future. “Just because we know how to jump one meter does not mean we know how to get to the Moon”, Alexandre Lebrun tells Tech&Co.

“Current generative AIs can give the impression of consciousness, but if we want to achieve true consciousness, we may have to completely rethink the method,” says the entrepreneur.

An AI rebelling against its creators should also remain a fantasy. “Current AIs are static programs; it is humans who decide when and how they are trained or updated”, recalls Thomas Wolf.

“They are not going to break their chains like in science fiction scenarios”, the entrepreneur assures Tech&Co.

“There is no doubt that they will exist one day”

However, just because the method does not yet exist does not mean it will never see the light of day. “I have no doubt that superhuman AIs will exist one day,” Yann Le Cun, one of the world’s leading AI researchers, says on Twitter. But for the scientist, we simply do not have the right technique yet, and humanity should succeed in “aligning” these AIs in the long term.

“We are the ones who design and set the objective functions of AIs. This makes the alignment of AIs much easier than that of humans or companies”, says the researcher on Twitter.
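To illustrate what “setting the objective function” can mean in practice, here is a deliberately simplified sketch, not Yann Le Cun’s actual proposal: the task_loss, safety_penalty and objective functions, and the weight given to the penalty, are all invented for the example. The point is that every term the AI optimizes is written and weighted by its designers.

```python
# Toy sketch of a human-designed objective function (illustration only):
# the total objective is task performance plus an explicit, human-chosen penalty.

def task_loss(prediction, target):
    """How far the model's output is from what we want (squared error here)."""
    return (prediction - target) ** 2

def safety_penalty(action_risk, weight=10.0):
    """Human-chosen term that makes risky behaviour costly to optimize against."""
    return weight * action_risk

def objective(prediction, target, action_risk):
    # The designer decides both terms and their relative weight:
    # that choice is the "alignment" lever the researcher refers to.
    return task_loss(prediction, target) + safety_penalty(action_risk)

print(objective(prediction=0.8, target=1.0, action_risk=0.05))   # ≈ 0.54
```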

For others, the scenarios based on “general AI” and “existential risk” are rather smokescreens put up by the big companies in the sector to divert attention from much more pressing issues. Among them: “worker exploitation”, “massive data theft”, the proliferation of fakes and the concentration of power in the hands of a few companies, as listed in an open letter written by leading thinkers on AI ethics.

In the meantime, the capabilities of generative AIs continue to advance – but they still obey humans. “Generative AIs are not Terminators; what needs to be regulated is how people use them,” Thomas Wolf said.
