• Scams created with ChatGPT are hard to detect
  • There are still ways to spot them
  • A simple rule avoids the worst outcomes

In the past, there was a very easy way to spot an online scam: the message was often riddled with spelling mistakes, or made no sense at all. With tools such as ChatGPT and its rivals, cybercriminals now have a formidable ally for generating convincing text. Fortunately, as this article shows, there are still some simple ways to anticipate the danger.

Messages that are too good to be true

If you are told that you are the lucky winner of a competition, or offered a suspiciously advantageous job, you should be on your guard. The danger is greater with ChatGPT because the AI can generate believable text modeled on real examples.

In any case, you can examine the sender's e-mail address and check whether it is legitimate. It is also worth searching for their name to determine whether they are a real person.
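As a first automated check, you can at least compare the sender's domain against a list of domains you already trust. Below is a minimal sketch; the `TRUSTED_DOMAINS` set and the addresses are illustrative only, and a real check should go further (headers, SPF/DKIM results, etc.).

```python
# Minimal sketch: compare a sender's domain against a hand-maintained
# trusted list. The domains below are placeholders, not real guidance.
TRUSTED_DOMAINS = {"example.com", "example.org"}

def sender_domain(address: str) -> str:
    """Return the domain part of an e-mail address, lowercased."""
    # rsplit so that an address containing several "@" signs still
    # yields the part after the last one
    return address.rsplit("@", 1)[-1].lower()

def looks_trusted(address: str) -> bool:
    """True only if the sender's domain is exactly on the trusted list."""
    return sender_domain(address) in TRUSTED_DOMAINS

print(looks_trusted("support@example.com"))   # True: known domain
print(looks_trusted("support@examp1e.com"))   # False: look-alike domain
```

Note that an exact-match check like this deliberately rejects look-alike domains (`examp1e.com` with a digit one), which is exactly the kind of trick scammers rely on.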

Do not give out sensitive information

When cybercriminals exploit ChatGPT skillfully, the resulting message may ask you to reveal personal details, banking information, passwords, and so on. As always, ask yourself why an organization or individual would need this data.

In general, no serious company or government agency will ask you to provide such information without a valid reason, and it is rarely done by e-mail. Also, think twice before opening attachments or clicking links, as both can carry malware.
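One concrete habit before clicking: look at the hostname a link actually points to, not the text displayed. A minimal sketch, using Python's standard `urllib.parse` (the URLs below are invented for illustration):

```python
from urllib.parse import urlparse

def real_host(url: str) -> str:
    """Return the hostname a URL actually points to, lowercased."""
    return (urlparse(url).hostname or "").lower()

# A classic phishing trick: the familiar brand name appears as a
# subdomain, but the real destination is the final registered domain.
link = "https://secure-login.example-bank.com.evil.net/reset"
host = real_host(link)
print(host)                                  # the full real hostname
print(host.endswith(".evil.net"))            # True: this is not the bank
```

The point of the sketch is the habit, not the code: the domain that matters is the one at the end of the hostname, whatever reassuring words appear earlier in the link.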

Remember this simple rule

Faced with scams carried out via ChatGPT, remember one simple rule at all costs: never rush. Skillful crooks can use these AIs to manipulate us and instill a sense of urgency, and it is precisely in those moments that we are most prone to mistakes. Stop and ask yourself why this person is so insistent that you click that link or hand over that particular piece of information.

Interestingly, the threat posed by ChatGPT and other language models is starting to worry law enforcement. Europol recently confirmed these fears: “The potential exploitation of these types of AI systems by criminals offers bleak prospects. Its ability to write very realistic texts makes it a useful tool for phishing purposes”.

California18
