California18

ChatGPT and its dark side: Artificial Intelligence can do things that maybe it shouldn’t

OpenAI’s ChatGPT and other Artificial Intelligence (AI) systems have become an obsession for a large share of internet users in recent months. These platforms simply seem to have no brakes.

We have seen this language model evolve to such an advanced degree that, little by little, it gives the impression of edging toward something almost self-aware, a system that would no longer require a direct instruction in order to think and act.

We have already discussed the case of Steve Wozniak, Elon Musk and other specialists who have called for a pause in the training of these systems in order to focus on creating safety measures and preventive regulatory safeguards in case something goes wrong with AI.

Frankly, at this point that does not seem likely to happen. Meanwhile, users continue to explore the OpenAI platform and are finding some uses that may not be so appropriate, proper or ethical.

The dark side of ChatGPT: all this can be done although maybe it shouldn’t

ChatGPT’s conversational Artificial Intelligence can be a great work tool, capable of dramatically increasing anyone’s productivity.

Yes, many things can be done with this system, but it can also be put to malicious use. Here are some examples of what ChatGPT has done, or could do, but shouldn’t:

  • It can be used to write malicious or defective code. Its ability to develop games, extensions and applications can be abused to exploit vulnerabilities or to build components that give malware access to steal data.
  • ChatGPT can be complicit in email or social media scams, generating highly convincing and personalized messages to trick victims out of their money or personal information.
  • Artificial Intelligence can be exploited to generate offensive or simply false content. Insults, threats, fake news and deepfakes are easier to produce than ever thanks to ChatGPT, and it is now very difficult to tell what is fake from what is real, as happened recently with the deepfake images of the Pope in fashionable clothes.
  • At a more “harmless” level, AI can be used to perform tasks that each person should do individually, such as writing homework, essays or school assignments, helping students complete their obligations in a fraction of the time. But this strategy also encourages plagiarism and undermines real learning.

In the end, ChatGPT is a tremendously powerful tool. But in reality its development, training and use are far from transparent, which makes it dangerous in the wrong hands.
