AI in Election Propaganda
Artificial intelligence has become a powerful tool in many industries, but it is also being misused for political manipulation. Reports have surfaced that AI models, including ChatGPT, have been used by groups attempting to influence elections worldwide. OpenAI, the creator of ChatGPT, recently revealed how various actors have tried to exploit its AI technology to create misleading content related to elections in the U.S., India, Rwanda, and the European Union.
One of the cases involved an Iranian-backed effort to create fake news websites in English. These websites were designed to appeal to different political groups in the U.S. and spread manipulated content. Social media posts supporting these websites were also generated using AI, though there is no clear evidence that they reached a large audience. U.S. intelligence agencies have confirmed that similar attempts have been made by groups linked to Russia and China. However, there has been no indication that these campaigns have successfully influenced voters on a large scale.
A separate campaign focused on Rwanda, where AI was used to generate posts in favor of the ruling political party. These messages were shared repeatedly on social media, with more than 650,000 identical posts flooding the platform X. OpenAI also blocked attempts to use AI-generated content in discussions about elections in the European Union and India, stopping them before they gained significant traction. However, it remains uncertain whether the groups behind these campaigns simply switched to other AI models.
Election Cybersecurity Risks and AI Exploitation
Beyond election interference, AI is also being used to support cyberattacks. One Iranian hacker group, known for targeting water and wastewater plants, attempted to use ChatGPT to aid its operations. According to OpenAI, the group tried to gather default passwords for industrial control systems in hopes of gaining unauthorized access to critical infrastructure. The hackers also searched for details about common internet routers in Jordan to identify potential vulnerabilities, and requested help with coding problems that could be useful for hacking.
This group, which had previously breached U.S. and Israeli water systems, was sanctioned earlier this year. While there is no evidence that its activities damaged American infrastructure, its ability to penetrate systems using default login credentials highlights persistent weaknesses in critical-infrastructure security. The group has since gone dormant, but the possibility of similar attacks remains a concern.
Hackers linked to China also attempted to breach the personal and corporate email accounts of OpenAI employees through phishing attacks, sending fake emails designed to trick individuals into revealing their login credentials. OpenAI confirmed that the attempt was unsuccessful, but the incident is further evidence that AI companies themselves are targets for cybercriminals.
AI’s Role in Digital Manipulation
While AI is widely used for positive purposes, its misuse in digital manipulation is a growing problem. The use of AI-generated content in propaganda campaigns shows how the technology can be weaponized for misinformation. So far, however, experts believe these AI-driven efforts have not significantly changed public opinion or produced major breakthroughs in influence operations.
Despite the presence of bad actors, OpenAI's transparency in reporting such cases provides insight into how AI is being misused, and the company's ability to detect and shut down these attempts early has limited their impact. The ongoing challenge is preventing malicious actors from finding other ways to exploit AI tools for political manipulation and cyberattacks.