© APA – Austria Press Agency

Digitization State Secretary Florian Tursky (ÖVP) has called for a labeling obligation for artificial intelligence (AI). According to Tursky, it is important to be able to see at first glance when one is confronted with an AI. “In my view, innovation and the use of AI can only become widespread if there is full trust and transparency with regard to AI,” he emphasized at a panel at the 4Gamechangers Festival in Vienna.

As with the declaration of ingredients in food, people should know which algorithm they are dealing with, the State Secretary said. For this purpose, a national authority should be created to certify high-risk applications. As a first step, an office is to be set up in the Ministry of Finance to handle the development of competences, the preparations for national implementation, and the legal framework. According to Tursky, this office will later become an authority. As reported, this should happen when the EU's AI Act comes into effect.

ChatGPT maker wants strict regulation due to fake news danger

On Tuesday, the head of ChatGPT maker OpenAI called for strict regulation, citing the risk of false information being spread with the help of artificial intelligence. Due to the massive resources required, only a few companies will be able to pioneer the training of AI models, Sam Altman said at a hearing in the US Senate in Washington on Tuesday. These companies would have to be under strict supervision.

Altman’s OpenAI triggered the current AI hype with the text machine ChatGPT and with software that can generate images from text descriptions. ChatGPT formulates texts by estimating, word by word, the likely continuation of a sentence. One consequence of this procedure is that the software produces not only correct information but also invents completely incorrect information – and the user cannot tell the difference. This has raised fears that its capabilities could be used, for example, to produce and spread misinformation. Altman also expressed this concern at the hearing.
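The word-by-word continuation described above can be illustrated with a toy sketch. Real models like ChatGPT use large neural networks over subword tokens; the tiny hard-coded bigram table and the `most_likely_next`/`generate` helpers below are purely illustrative assumptions, but they show the principle: the model picks the statistically likely next word, with no notion of whether the result is true.

```python
# Toy bigram "language model": for each word, a hand-made distribution
# over likely next words. This is an illustrative assumption, not how
# ChatGPT is actually implemented.
BIGRAMS = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "moon": 0.5},
    "a": {"dog": 1.0},
    "cat": {"sleeps": 1.0},
    "moon": {"rises": 1.0},
    "dog": {"barks": 1.0},
}

def most_likely_next(word):
    """Pick the highest-probability continuation (greedy decoding)."""
    choices = BIGRAMS.get(word)
    if not choices:
        return None  # no known continuation: stop generating
    return max(choices, key=choices.get)

def generate(max_words=5):
    """Build a sentence word by word, always taking the likeliest next word."""
    word = "<start>"
    out = []
    for _ in range(max_words):
        word = most_likely_next(word)
        if word is None:
            break
        out.append(word)
    return " ".join(out)

print(generate())
```

Note that the probabilities encode only which word tends to follow which; a fluent but factually wrong sentence is scored just as highly as a true one, which is the root of the misinformation concern raised in the article.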

Altman suggested creating a new government agency that could put AI models to the test. A series of safety tests should be required for artificial intelligence – for example, whether it could spread independently. Companies that do not comply with prescribed standards should have their license revoked. AI systems should also be subject to review by independent experts.

Altman acknowledged that AI technology could eliminate some jobs through automation in the future. At the same time, however, it has the potential to create “much better jobs”. During the hearing in a Senate subcommittee, Altman did not rule out the possibility that OpenAI programs could one day be offered with advertising instead of as a subscription, as is currently the case.
