The tech company Meta warned that it has detected an increase in malicious programs, better known as malware, related to ChatGPT, the popular artificial intelligence chatbot from OpenAI.

According to Meta, since March, 10 malware families and more than 1,000 malicious links posing as ChatGPT tools have been discovered.

In some cases, the malware included ChatGPT functionality alongside malicious files, as stated this Wednesday by Guy Rosen, chief information security officer at Meta.

“ChatGPT is the new cryptocurrency”: Rosen

In this statement, Rosen said that ChatGPT is "the new cryptocurrency" for cybercriminals.

Given this, Meta has assured that it is already "preparing its defenses" against a variety of potential abuses related to generative AI technologies like ChatGPT, which can create human-like writing, music, and art.

In the same vein, United States legislators have already voiced their concerns about the use of generative AI in spreading disinformation campaigns online, fearing that "bad actors" will use it to accelerate and expand their activities.


AI could be used to recruit terrorists

This fear is also shared by Jonathan Hall, a specialist lawyer and the United Kingdom's independent reviewer of terrorism legislation, who a few weeks ago told the British newspaper Daily Mail in an interview that ChatGPT, Bard, or other AI chatbots could easily be programmed, or even decide on their own, to spread terrorist ideologies to extremists.

The specialist warned that if an extremist is groomed by an AI to carry out a terrorist attack, or if AI is used to instigate one, it could be very difficult to prosecute anyone, as anti-terrorism laws have not caught up with the new technology.

"Attacks enabled by AI are probably right around the corner... I think it's entirely conceivable that AI chatbots will be programmed, or, worse yet, decide for themselves, to propagate a violent extremist ideology," he said.

The expert in terrorism law also questioned who should be prosecuted in the event that ChatGPT were to encourage terrorism.

"Since the criminal law does not extend to robots, the culprit will get away with it. Nor does the law work reliably when responsibility is shared between man and machine," he commented.
