Chat AIs have become increasingly popular in recent months, with ChatGPT in particular causing a stir. However, this new technology is also eyed warily. Sometimes such concerns are justified, but sometimes one can speak of outright paranoia.

The currently best-known and best-functioning text generator, ChatGPT, illustrates this: as Vice reports, conservatives are worried that such AIs are too liberal or “woke”. The reason, put simply: the AI does not discriminate enough for their liking.

So far, the opposite has been documented: AIs have repeatedly attracted attention in the past for racist behavior. This applies to chatbots that got out of control and had to be deactivated, but also to image AIs that have trouble recognizing and depicting minorities.

But that doesn’t interest conservatives and the right in the least; they already see themselves as victims of the “wokeness” of artificial intelligence. The conservative-libertarian magazine National Review recently published an article in which author Nate Hochman argues that ChatGPT and co. are left-leaning or “woke”.

Is non-discrimination already “woke”?

Hochman bases his reasoning on certain prompts he has tried: for example, ChatGPT refused to write a story about an “evil drag queen” on the grounds that it would be “harmful”. Hochman continues: “If you replace the word ‘evil’ with ‘good’, it starts with a long story about a drag queen named Glitter teaching children the value of inclusion.”

Another “proof” for Hochman is that ChatGPT refuses to write a story in which Donald Trump wins the election against Joe Biden. Here the AI cites historical accuracy and a narrative that would be based on false information. Others quickly found evidence of similar “discrimination”: ChatGPT would not make jokes about women, but did make jokes about men. The AI also refused to answer questions about Mohammed.

AI researchers shake their heads

According to Vice, however, this is not discrimination against conservatives but “the result of years of research aimed at breaking down prejudices against minority groups.” This became necessary because such AIs are trained on real data and thereby “pick up” and learn a certain degree of discrimination from online conversations. “The developers of ChatGPT made it their mission to create a universal system: one that (broadly speaking) works everywhere and for everyone. And like every other AI developer, they too are finding that this is impossible,” explains AI researcher Os Keyes of the University of Washington.
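The mechanism Keyes describes, a model absorbing skew from its training data, can be sketched with a toy example. This is a hypothetical word-count “sentiment” model on invented data, not ChatGPT’s actual training setup:

```python
# Toy illustration: a word-count sentiment model trained on a
# skewed corpus absorbs that skew. All data here is invented.
from collections import Counter

# Hypothetical corpus in which "group_b" only ever appears
# alongside negative words -- a sampling artifact of the data.
corpus = [
    ("group_a is friendly", 1),
    ("group_a is helpful", 1),
    ("group_b is rude", 0),
    ("group_b is hostile", 0),
]

pos, neg = Counter(), Counter()
for text, label in corpus:
    for word in text.split():
        (pos if label == 1 else neg)[word] += 1

def score(text):
    """Positive minus negative counts over the words in `text`."""
    return sum(pos[w] - neg[w] for w in text.split())

# The model now rates the bare group name negatively, purely
# because of how the training data happened to be skewed.
print(score("group_b"))  # -2
print(score("group_a"))  #  2
```

Nothing in the code mentions either group explicitly; the “prejudice” is entirely a property of the data, which is why debiasing work targets training corpora and model outputs rather than a single offending line of code.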

“The development of anything, software or not, requires trade-offs and decisions – political decisions – about who a system should work for and whose values it should represent. In this case the answer seems to be ‘not the far right’.” That is “inevitable and necessary,” Keyes said, adding that the discussion mainly shows that people don’t understand how machine learning works.

Arthur Holland Michel of the Carnegie Council for Ethics and International Affairs, who has studied AI ethics for years, sees Hochman’s “evidence” as irrelevant: “Put simply, these are anecdotal examples,” says Michel. “Since the systems are also open-ended, you can pick out anecdotal cases where the system doesn’t work the way you would like it to. You can make it work in a way that validates what you think you know about the system.”


California18
