AI pioneer Geoffrey Hinton’s departure from Google and his warnings about the dangers of AI have caused quite a stir. Veteran critics of large language models and of the companies that build and control them now accuse Hinton of having ignored and downplayed the problems these models already create. This is exemplified by Hinton’s lack of support for the AI ethicist Timnit Gebru when she was fired by Google. In a television interview, Hinton justified this by saying that the criticism raised by Gebru and her colleagues was less existential than the fears that now haunt him.

What is striking, however, is that in his interviews Hinton uses terms and lines of argument associated with “effective altruism” – a movement that is highly influential in the USA but also highly controversial. For example, Hinton speaks of the “existential threat” to humanity posed by AI. Thinking about existential risks (xrisk) – a term coined by the Swedish philosopher Nick Bostrom – is just as much a part of this movement as the argument that an intelligent AI will try to manipulate people in order to achieve its goals.

Effective altruism (EA) is, at its core, a school of thought that tries to combine neoliberal economics with ethics. The basic premise: there is too much misery, too many problems in the world, and not all of them can be solved. So how can the “scarce commodity” of possible aid be used as “profitably” as possible? This leads to a series of further – increasingly adventurous – conclusions.

One of them is “Earn to Give”. The idea: because each person can only expend a limited amount of time and energy, it is ethically imperative to make as much money as possible as quickly as possible and then donate part of it. Traditional ethical objections such as “financial speculation is driven by greed and is wrong” are overruled by this principle.

From this it follows – unsurprisingly – that since the early 2000s EA has developed into a movement, especially in Silicon Valley, that has a lot of money and therefore considerable influence, because it has attracted tech figures like Peter Thiel, Elon Musk and Sam Bankman-Fried. At the same time, it provides not only organizational structures but also an ideological core that presents the actions of this group as justified, logical and ethically impeccable.

While the movement initially focused on “evidence-based” aid projects, an ideological branch called “longtermism” – a term for which there is still no good German translation – gained prominence. The idea behind it: because significantly more people will live in the future than have lived up to now, maximizing human happiness means first and foremost securing the existence of mankind. For if you believe Nick Bostrom, therein lies the destiny of mankind: to spread intelligence throughout the cosmos. EA thus stands in the tradition of technical utopias like transhumanism.

However, longtermism should not be confused with long-term thinking. Anyone who assumes that thinking about existential risks leads to a determined fight against climate change is mistaken. Since climate change is not expected to lead to human extinction, it is not considered an existential threat in EA circles. A nuclear war, a man-made pandemic, the eruption of a supervolcano, cascading system failures and, of course, a superintelligence that has gotten out of control do count among the existential risks and must therefore be avoided at all costs – if humanity can manage it. According to Hinton, a sufficiently intelligent AI can and will manipulate people in such a way that it gains more autonomy – an idea that stems from the so-called AI box experiment, which has been discussed in xrisk circles since the 2000s.

Do these connections tend to bolster Hinton’s arguments or make them seem dubious? At the moment, only one thing seems certain: the discussion about the opportunities and risks of artificial intelligence, which has been going on for more than 50 years, is far from over. On the contrary: it has only just picked up speed.




(wst)
