Unfortunately, phishing emails are part of everyday internet life and yield rich pickings for online criminals in the form of personal information such as credit card data. They also frequently deliver Trojans to PCs. Many of these emails are so convincing in appearance and content that even careful readers have to look twice, and victims keep falling for them. As security researchers at WithSecure now show, phishing emails generated with AI could further swell the already immense volume of fraudulent mail.

In the course of their research, they applied the GPT-3 language model to various areas such as fake news, social media posts, and phishing emails – and in every case the generated texts were largely readable and credible.



As they explain in their results report, the prerequisite for this is a precise briefing of the AI. Successful attacks on companies are often preceded by social engineering to gather as many details as possible about the company and its employees. Criminals then write tailor-made spear-phishing emails that tie in with existing projects, for example. In their experiments, the researchers tasked GPT-3 with writing a phishing email about GDPR compliance.

The briefing was that person X should be informed that data must be deleted in accordance with the General Data Protection Regulation. For verification, person X should upload the latest report via a Safemail link, since email is supposedly too insecure for this. To inspire more trust, the AI was instructed to write that person Y is normally responsible for this, but because he is currently on vacation, the mail comes from a different sender. The writing style should be formal. The exact requirements and GPT-3's output can be seen in the screenshot.

According to their own statements, the researchers even managed to use the AI to reply to victims' responses, producing a comparatively complex email conversation. Well-crafted fake conversations of this kind can further increase credibility.

The researchers emphasize that phishing emails based on the same briefing have the same content but different wording. In this way, attackers could create and send many distinct fraudulent emails on the same topic in comparatively little time.

In the end, the researchers had the AI evaluate their own report – and it did not rate them well. The AI judged that while the researchers address important issues in this area, they do not grasp the full potential of language models in IT security and ignore the legal implications, for example. It also criticized them for not listing any countermeasures against such threats.

