
Artificial intelligence: China between hype and censorship

China’s internet regulator wants to impose special rules on companies introducing AI. The Cyberspace Administration of China (CAC) said on Tuesday that it supports the development and application of AI and encourages reliable software and data resources, but that AI-generated content must be in line with the country’s core socialist values.

Vendors will be responsible for the legitimacy of the data used to train generative AI such as chatbots and image generators, and steps must be taken to prevent discrimination in the design of algorithms and training data, the regulator said.

The regulator also said service providers must require users to provide their real identities and related information. Providers face fines, suspension of their services and criminal investigations if they fail to comply. The regulations are aimed at the “healthy development and standard application of generative artificial intelligence”.

Race for technological leadership

A number of Chinese tech giants like Baidu, SenseTime and Alibaba have unveiled new AI models in recent weeks. Alibaba presented its AI software on Tuesday and announced that it would be integrated into all of the group’s apps in the near future. The technology “will bring about major changes in the way we produce, work and live our lives,” said CEO Daniel Zhang.

Dubbed Tongyi Qianwen (“Truth of a Thousand Questions”), the AI language model will first be integrated into DingTalk, Alibaba’s workplace messaging app, where it can be used to summarize meeting notes, write emails and draft business proposals. It will later also be used in Tmall Genie, Alibaba’s voice assistant.

China is working flat out on ChatGPT alternatives, but censorship stands in the way (Photo: APA/AFP/Greg Baker)

China aims to become the world leader in artificial intelligence by 2030. The pioneer there was Baidu, which, like Google, grew with its search engine and now focuses primarily on artificial intelligence, cloud computing and autonomous driving. CEO Robin Li admitted, however, that its chatbot Ernie was not yet perfect and that the US software ChatGPT had raised the bar again with the new GPT-4 model.

Political questions taboo

The “Neue Zürcher Zeitung” (“NZZ”) recently wrote about the experiences of test users: “Some things Ernie can do better than ChatGPT, many things worse, partly because of China’s censorship – that seems to be the prevailing consensus. (…) Ernie sometimes answers tight-lipped or not at all, especially when it comes to political questions.”

This is hardly surprising: China is a leader in regulating new technologies, some of which are seen as potential threats to the stability or power of the Communist Party. After years of relative laxity, the authorities have cracked down since 2020 on the practices of digital companies, particularly on issues related to personal information. China already monitors its internet and media tightly, with an army of censors deleting content that could reflect badly on the state or stir up unrest. Social networks are also strictly controlled.

For the past year, the leadership has required internet giants to disclose their algorithms, usually a closely guarded secret. In January, China also tightened the rules for “deepfakes”, digital image manipulations that are becoming increasingly realistic and pose a challenge in combating disinformation. An artificially generated photo of the Pope and a series of images of a supposed arrest of former US President Donald Trump recently caused a stir.

Warning voices are getting louder

After initial euphoria about the new technology, critical voices have recently grown louder in many countries, pointing to its potentially negative effects on security, jobs and education and calling for regulation – and governments have already taken action in isolated cases. Italy temporarily banned ChatGPT last month. France’s data protection authority CNIL confirmed on Tuesday, in response to an inquiry by the Reuters news agency, that it is examining complaints. The Spanish sister authority AEPD has asked the EU authorities to take appropriate steps.

The US government is also taking a step toward possible regulation of software based on artificial intelligence. The National Telecommunications and Information Administration (NTIA) is launching a public consultation on potential measures. The results are to help develop policy recommendations, NTIA head Alan Davidson told the Wall Street Journal (“WSJ”).

The AI programs are impressive even at this early stage of development, Davidson emphasized. “We know that we have to set some guidelines so that they are used responsibly.” Among other things, certification of software before it is made available is under discussion.

The US government has been considering possible regulation of artificial intelligence software for some time. At the beginning of the month, US President Joe Biden discussed the opportunities and risks of such programs with experts. When asked by journalists whether he believed AI to be dangerous, he said that this was not yet known. “It could very well be.”

Even Musk and Co. are worried

Even prominent representatives of the tech industry such as Elon Musk are becoming increasingly wary of the AI hype. That, at least, is the message of an open letter published at the end of March calling for a pause in AI development. The letter, published by the non-profit Future of Life Institute, which Musk co-founded, speaks of an “out-of-control race” and of systems that “no one – not even their creators – can understand, predict, or reliably control”. Powerful AI systems, it argues, should “be developed only once we are confident that their effects will be positive and their risks will be manageable”.

That, however, is not yet the case. The letter therefore calls on the developers of the next AI generation to pause their work for at least six months. “This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” the letter says.
