Plagiarism, discriminatory statements and fake news: such problems crop up again and again with smart chatbots, and the web is full of examples. On request, ChatGPT produces texts praising the Chinese Communist Party and, if asked, even supplies code to distribute them. It smuggles fictitious facts into factual explanations and interprets poems that do not exist.

So far, so familiar. The difficult question is how to prevent this, and it is becoming ever more urgent: in mid-March, Google put its AI chatbot Bard online, initially in the USA and Great Britain. In China, search engine giant Baidu offers the chatbot Ernie. The influence of such bots is growing fast.

“When using chatbots based on large language models, it is impossible to guarantee that answers are factually accurate. If such models are trained continuously, this can become a real danger, since fake news can then spread across the Internet at breakneck speed,” says Ute Schmid, Head of the Chair for Cognitive Systems at the University of Bamberg. Researchers around the world are therefore looking for ways to make chatbots more reliable, relying on cleaned training data, refined training methods and new AI models.
