OpenAI has blocked multiple North Korean hacking groups from misusing its ChatGPT platform to aid cyberattacks. The company disclosed the findings in its February 2025 threat intelligence report, highlighting increasing attempts by state-sponsored hackers to exploit artificial intelligence (AI) tools for malicious purposes.
The blocked accounts were linked to well-known North Korean hacking groups, including VELVET CHOLLIMA and STARDUST CHOLLIMA. These groups, notorious for their advanced cyber skills, have strong ties to North Korea’s government and have been involved in hacking operations worldwide.
How North Korean Hackers Tried to Use ChatGPT
With the help of an industry partner, OpenAI detected that these hackers were using ChatGPT for various cybercrime-related activities, including:
- Researching Cyber Tools and Techniques: The hackers used ChatGPT to learn about attack methods such as remote administration tools (RATs) and brute-force attacks against Remote Desktop Protocol (RDP), techniques that can grant unauthorized access to computers.
- Writing and Fixing Code for Hacking: They asked ChatGPT to help develop, debug, and troubleshoot malicious software, including C#-based RDP clients and PowerShell scripts for uploading, downloading, and executing malicious files.
- Crafting Phishing Emails: The groups used ChatGPT to create convincing phishing emails to trick people into giving away sensitive information. Their primary targets were cryptocurrency investors and traders.
- Hiding Malicious Code: The hackers sought assistance in making harmful programs harder to detect. They asked ChatGPT for ways to bypass security warnings and make their code look harmless.
- Finding Software Vulnerabilities: The groups researched weaknesses in applications and explored attack methods for macOS systems.
In addition to these activities, OpenAI’s analysts discovered previously unreported URLs hosting malicious files. OpenAI shared these indicators with cybersecurity firms, helping them block potential threats.
OpenAI Uncovers a North Korean IT Worker Scheme
During its investigation, OpenAI also identified accounts linked to a North Korean IT worker scheme, in which North Korean operatives pose as legitimate job applicants to get hired by Western companies. Once employed, they used ChatGPT to help complete job tasks such as writing code, fixing software issues, and communicating with team members.
These workers also used AI to create believable cover stories to hide their true identities. They developed excuses to explain suspicious behavior, such as refusing video calls, logging in from unknown locations, or working during unusual hours. The ultimate goal of this operation was to generate income for North Korea’s government, which has been known to use cybercrime as a major source of funding.
OpenAI’s Efforts Against Other State-Sponsored Cyber Threats
OpenAI’s security measures extend beyond North Korean hackers. Since October 2024, the company has disrupted multiple cyber campaigns, including operations originating from China and Iran. Among the campaigns uncovered:
- “Peer Review” Campaign: This operation used ChatGPT to develop tools for a large-scale surveillance project.
- “Sponsored Discontent” Campaign: Hackers created anti-American, Spanish-language articles to manipulate public opinion.
In October 2024, OpenAI reported that it had blocked more than twenty cyber operations linked to Iranian and Chinese state-sponsored hackers. These activities ranged from cyberattacks to covert influence campaigns aimed at spreading misinformation.
OpenAI’s Commitment to Security
OpenAI has made it clear that it is dedicated to preventing the misuse of its AI tools. The company has advanced security measures in place to detect and block malicious activities. It also collaborates with other cybersecurity firms to share critical intelligence, helping to prevent cyberattacks before they happen.
In its latest report, OpenAI stated, “We banned accounts demonstrating activity potentially associated with publicly reported DPRK-affiliated threat actors.” The company continues to actively monitor and combat threats to keep its AI platform safe.
This case highlights how AI tools can be used for both beneficial and malicious ends. While they provide remarkable benefits, they can also be exploited by cybercriminals. OpenAI’s ongoing efforts against such misuse underscore the importance of collaboration among tech companies, cybersecurity experts, and governments in protecting digital security.