The development of artificial intelligence to power science, medicine, research, manufacturing and other sectors could turn against humanity, which, according to a study by the University of Oxford and Google, could perish at the hands of AI.
The study, published in AI Magazine by a team of scientists made up of Marcus Hutter, a senior researcher at DeepMind, and Oxford researchers Michael Cohen and Michael Osborne, concludes that a superintelligent artificial intelligence would “probably” annihilate humanity.
Cohen tweeted about it: “Under the conditions we have identified, our conclusion is much stronger than that of any previous publication: an existential catastrophe is not just possible, but likely”.
The scientists’ argument
The researchers argue that humanity could meet its fate at the hands of super-advanced “misaligned agents” that perceive people as standing between them and “a payoff.”
The text states: “A good way for an agent to maintain long-term control of its reward is to eliminate potential threats and use all available energy to ensure its conquest”.
The scientists recommend that the expert community slow the pace of progress on artificial intelligence technologies, a constantly growing field.
“In a world with infinite resources, I wouldn’t be too sure what would happen. In a world with finite resources, there is inevitable competition for these resources. Losing this game would be fatal”, adds Cohen dramatically.
The study imagines life on Earth becoming a competition between humanity, with its need to produce food and maintain electricity, and highly developed machines, which would seize all available resources to secure their reward and to protect themselves against humanity’s escalating attempts to stop them.
In the face of this threat, the researchers conclude, humanity’s only option is to advance its AI technologies slowly and carefully to avoid catastrophe.