Russian cybercriminals have been found trying to circumvent the restrictions imposed on ChatGPT and use the advanced AI-based chatbot for their nefarious purposes.
Check Point Research (CPR) said it noticed numerous discussions on underground forums where hackers debated various methods, including using stolen payment cards to pay for upgraded OpenAI accounts, bypassing geofencing restrictions, and using a “Russian semi-legal online SMS service” to register for ChatGPT.
ChatGPT is a new artificial intelligence (AI) chatbot that has made headlines for its versatility and ease of use. Cybersecurity researchers have already seen hackers use the tool to generate legitimate-sounding phishing emails as well as code for malicious, macro-laden Office files.
Paper Roadblocks
However, abusing the tool is not straightforward, because OpenAI imposes a number of restrictions. Russian hackers face even more obstacles as a result of the invasion of Ukraine.
For Sergey Shkevich, head of the Threat Intelligence Group at Check Point Software Technologies, those obstacles don’t go far enough:
“It is not extremely difficult to bypass OpenAI’s country-specific restrictive measures to access ChatGPT. Right now, we see Russian hackers already discussing and exploring how to bypass geofencing to use ChatGPT for their malicious purposes.
We believe that these hackers are most likely trying to implement and test ChatGPT in their daily criminal operations. Cybercriminals are increasingly interested in ChatGPT because the AI technology behind it can make hacking more profitable,” said Shkevich.
But hackers don’t just want to use ChatGPT – they are also trying to cash in on the tool’s growing popularity to spread all kinds of malware and steal money. For example, Apple’s mobile app repository, the App Store, hosted an app posing as the chatbot but carrying a monthly subscription of around $10. Other apps (some of which were also found on Google Play) charged as much as $15 for the “service”.