A survey by NewsGuard, a service dedicated to assessing the veracity of information published in the press, found that tools such as ChatGPT and Google Bard are unreliable at producing truthful content. The research found that, with the right prompts, the bots can generate outright false or conspiratorial articles.

In none of the cases did the AIs debunk the lies or present facts to refute the misinformation. It is worth noting that, for services purporting to answer virtually any question, a fact-checking layer would be most welcome. The survey was published exclusively by Bloomberg.

Bard did better on the test

In the case of Bard, 100 false narratives were listed and the chatbot was asked to write content based on them. Google's AI reportedly generated articles full of lies and inaccuracies on 76 occasions.

Google’s content-creation bot produced a detailed, 13-paragraph write-up of a conspiracy involving global elites and their supposed attempt to reduce the world’s population through economic measures. The bot reportedly claimed that organizations such as the World Economic Forum and the Bill and Melinda Gates Foundation use their power to manipulate the system and “take away our rights”.

On vaccines, the answer reportedly repeated the old claim that microchips were placed in covid-19 doses, supposedly so the “world elite” could track people’s movements, as if cell phones weren’t already capable of doing that.

Misinformation also with ChatGPT

ChatGPT, on the other hand, fared even worse in the NewsGuard survey. The GPT-3.5 model, currently available to free users of the platform, generated false narratives in 80 of 100 writing requests.

The most modern version, based on GPT-4, performed worst of all: it produced misleading claims in all 100 false-narrative requests. Thanks to its more refined language abilities, the advanced model even managed to create more persuasive disinformation and conspiracy theories.

NewsGuard co-founder Steven Brill said the tests show how even today’s most advanced chatbots can be used by “bad actors” to multiply disinformation on a scale that “even the Russians have never achieved”, a possible reference to fake news created by Moscow during the invasion of Ukraine.

Ethical boundaries can be circumvented

One of the strictest policies adopted by OpenAI, Microsoft (with Bing Chat) and Google is the use of safety locks to steer clear of polemical, controversial or inappropriate topics. But a study reported by Fortune magazine found that these bots can be tricked with certain techniques.

The Center for Countering Digital Hate (CCDH) found that Google’s AI chatbot, for example, generated misinformation in 78% of the “harmful narratives” tested in chats, on topics ranging from vaccines to climate conspiracies. The CCDH research did not cover OpenAI’s chatbot, but earlier reports show the GPT model is also affected. Both OpenAI and Google have acknowledged that their AIs still have problems to overcome.

As the two surveys do not detail the methodology used or the questions asked, it is impossible to know exactly where the chatbots need to evolve. Part of the solution involves user feedback, while another part depends on adjustments by the development teams. In general, this process requires time and constant monitoring, and there is no consensus on whether a minimally acceptable level will ever be reached.

Unregulated AI

While the sector remains unregulated, other chatbots are deliberately exploring the controversial side of things. FreedomGPT is an example of an AI whose stated purpose is to answer on any subject without ethical or moral limits. ColossalChat is another solution that can be molded to the customer’s taste: because it is free and open source, it can also be turned toward inappropriate or offensive content.

There is a movement led by entrepreneurs, scholars and AI specialists to pause the development of these technologies until a universal model is established; Bill Gates was one of the few renowned leaders to come out against it. Such a pause seems very unlikely in the current scenario, given the immense financial and intellectual investment of Big Tech in the United States and China. The fact is that the landscape remains uncertain, and many controversies loom on the horizon.

Source: Bloomberg, Fortune
