
Cybercriminals exploit trust in AI tools — malicious ChatGPT answers appear as top Google results for common problems


Cybercriminals have found a dangerous new way to use AI tools like ChatGPT and Grok to spread malware. This method does not involve suspicious downloads or fake websites. Instead, hackers are using popular search results to trick people into running harmful commands on their own computers.

The attack starts when hackers begin a conversation with an AI assistant about a trending topic. During the chat, they steer the AI into suggesting a command to enter in the computer’s terminal. On the surface, the command appears helpful, such as freeing up disk space or fixing an error, but it is secretly designed to give the attacker access to the user’s device.

Once the AI conversation is complete, the attacker makes it public and pays to boost its visibility. This causes the conversation to appear at the top of Google search results. When users search for the same topic, they see the harmful instructions just like they would see regular advice.

How Users Are Being Affected

This technique has already caused real infections. One example involved Mac-targeting malware known as AMOS (Atomic macOS Stealer). A Mac user searched online for “clear disk space on Mac,” clicked a promoted ChatGPT link in the search results, and followed the terminal command suggested in the AI conversation.


By running that command, the user unknowingly allowed hackers to install AMOS malware on their device. The malware was able to operate without any warning signs.

Experts from Huntress, a cybersecurity firm that reported this issue, note that the harmful ChatGPT conversation stayed visible in Google search results for at least half a day after the issue was publicly reported. This shows how quickly AI-generated content can spread and how dangerous it can be when manipulated by attackers.

Why This Method Is Dangerous

What makes this attack especially worrying is how easily it bypasses typical scam warnings. Users do not need to download anything suspicious, click on strange links, or visit unknown websites. Instead, they are tricked by a command that looks safe and legitimate.

Hackers are essentially exploiting the trust people place in AI-generated content and top search results. The AI appears to provide reliable instructions, but in reality they are part of the malware delivery chain.

Huntress explains that attackers carefully craft AI conversations to make the commands seem helpful. Then, by boosting these conversations in Google search, they reach a wider audience. Users searching for a simple solution can unknowingly fall into the trap.


Simple Steps to Stay Safe

The key advice from cybersecurity experts is straightforward: never paste commands into your computer’s terminal or browser address bar unless you fully understand what they do. Even commands that seem harmless can give attackers access to sensitive information or control of your device.

For example, deleting files, clearing disk space, or running cleanup commands may seem routine, but if these instructions come from an unverified AI conversation, they could carry hidden malware. Users should always double-check instructions from trusted sources and avoid relying solely on AI-generated solutions for technical tasks.
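The advice above can be made concrete. One common way attackers hide a dangerous payload inside a plausible-looking “fix” is to base64-encode it, so the pasted command reveals nothing at a glance. A safe habit is to decode and read such blobs before ever executing them. The sketch below is purely illustrative, not taken from the actual AMOS campaign; the encoded string here decodes to a harmless `echo`, standing in for a real payload:

```shell
# Hypothetical "cleanup" snippet of the kind an attacker might plant:
# the base64 blob hides the real command from a casual reader.
PAYLOAD='ZWNobyAiaGVsbG8i'   # in a real attack, this could decode to a curl | sh installer

# Safe habit: decode and READ the command instead of piping it into a shell.
echo "$PAYLOAD" | base64 -d   # prints: echo "hello"
```

If the decoded text downloads or executes anything you did not expect, do not run it.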

This new type of cyberattack highlights the risks of combining AI tools with online search. It shows how quickly attackers can exploit emerging technologies to target ordinary users and spread malware in ways that are hard to detect.
