After social networks, email accounts and cryptocurrency wallets, it is now the turn of ChatGPT user profiles to become gold for cybercriminals. With the proliferation of applications involving artificial intelligence and, above all, the integration of the technology with corporate systems, credentials of this type have gained additional value on this illegal market, with profiles either published openly or sold according to how valuable each one is.

The first case mainly involves free accounts, which can expose personal information contained in the searches users have made. The second is more profitable: paid ChatGPT Plus subscriptions allow criminals to circumvent the geographic restrictions that OpenAI, the company behind the technology, applies in some countries, or open the door to corporate espionage if the account is integrated with internal systems.

According to information published by Check Point Research, the threat intelligence and research division of the cybersecurity company, a "lifetime upgrade" of a ChatGPT profile to the Plus version costs US$ 59.99 (about R$ 300) in the hands of cybercriminals. Officially, the subscription costs US$ 20 per month, approximately R$ 100.

Those who want to pay even less can do so, with the crooks offering shared premium accounts for an average of US$ 25, or about R$ 125. The monthly payments are made with stolen credit cards, so a blocked card is not an obstacle; the "service" offered by the criminals is precisely this replacement, with "satisfaction guaranteed".

According to the report, mentions of attacks involving ChatGPT accounts have increased significantly in recent weeks. Dark web forums remain the main channel for contact and negotiation between criminals, as well as places to post about successful attacks or services linked to fraudulently paid accounts.

Brute force attacks and regional blocks

At the same time, a parallel market for attacks associated with ChatGPT has emerged. Beyond account hijacking itself, criminals sell brute-force tools that test leaked credentials against accounts registered with the AI service. Even legitimate digital security testing tools are used for this purpose.

One example is SilverBullet. Normally used to send requests to a web application and run tests involving brute force and data scraping, the software has also become an ally in ChatGPT account theft. A configuration file is sold for it, aimed at large-scale profile theft and promising 50 to 200 credential checks per minute, along with proxy features that mask the connection and bypass the blocking systems meant to stop this type of attack.

Check Point also cites regional restrictions as a major factor driving this market. Countries such as China, Iran and Russia already restrict the use of ChatGPT, Italy has done the same, and Germany may follow. For criminals in these regions, paid international accounts are a way to keep their operations going.
