Attackers Exploit Bing AI to Distribute Crypto-Stealing Malware
Security researchers have identified a campaign in which attackers are actively poisoning Microsoft's Bing AI search results to distribute malware that steals cryptocurrency. According to a warning from SlowMist's Chief Information Security Officer, 23pds, threat actors are promoting a counterfeit version of a program called 'OpenClaw'. Unsuspecting users who search for and download this fake software inadvertently install malware designed to drain their digital assets and exfiltrate other sensitive information.
This attack vector represents a significant threat to retail crypto users who increasingly rely on AI-powered search tools for information and software discovery. The campaign specifically leverages the credibility of Bing's AI to lure victims, turning a trusted information source into a distribution channel for financial fraud.
Threat Is Part of Wider 'InstallFix' Malvertising Campaign
The Bing AI exploit is not an isolated event but a component of a larger, more sophisticated strategy known as the 'InstallFix' campaign. Security firm Push Security reports that threat actors are creating near-pixel-perfect clones of legitimate websites for popular developer and AI tools, such as Anthropic's Claude Code CLI. They then use malvertising, including sponsored results on Google Ads, to drive traffic to these fake pages.
Instead of providing legitimate installation scripts, the cloned sites trick users into running malicious commands that deploy infostealers like Amatera Stealer and Cuckoo. This multi-platform approach, which also includes hosting rogue installers on GitHub, demonstrates the attackers' ability to exploit user trust across various services, not just search engines. By abusing legitimate hosting platforms like Cloudflare Pages and Tencent EdgeOne, the attackers effectively blend their malicious traffic with normal web activity, complicating detection.
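The "run this command" lure works because the common pipe-to-shell install pattern executes whatever the server returns, with no chance for review. A minimal illustration of the mechanism (the file name and script body below are invented for the demo; no real URL is fetched):

```shell
# The familiar install one-liner has the form:
#   curl -fsSL https://<host>/install.sh | sh
# A cloned page only needs to swap <host> for a lookalike domain; the shell
# then runs whatever arrives. Simulated here with a local file standing in
# for the attacker-controlled response:
printf 'echo "payload ran without review"\n' > fetched.sh  # stand-in for curl output
sh fetched.sh
```

The point is that nothing in the command's appearance distinguishes a legitimate installer from a trojanized one; the difference lives entirely on the server side.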
New Attack Vector Erodes Trust in AI Platforms
The systematic targeting of AI tools marks a new frontier in cybercrime, directly impacting user trust in emerging technologies. As users turn to AI assistants for research and recommendations, these platforms become high-value targets for social engineering. The success of such campaigns creates significant fear, uncertainty, and doubt (FUD) within the crypto community, potentially leading to reduced activity from users concerned about asset security.
The incidents place direct pressure on technology giants like Microsoft and Google to strengthen their defenses against search-result poisoning and malicious advertising. For investors, they serve as a critical reminder to verify the source of all downloaded software and to be skeptical of installation commands, even those that appear to originate from a trusted platform.
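One concrete form that verification can take is comparing a downloaded installer against a checksum published on the project's official release page, reached independently of any ad or search link. A sketch of the habit (the file contents and the hash below are placeholders, not values from any real project):

```shell
# Verify a downloaded script against a vendor-published SHA-256 before running it.
script=install.sh
printf 'echo installed\n' > "$script"              # stand-in for a downloaded installer
# Placeholder: in practice, copy this from the project's official release page.
published="0000000000000000000000000000000000000000000000000000000000000000"
actual=$(sha256sum "$script" | awk '{print $1}')
if [ "$actual" = "$published" ]; then
    sh "$script"                                   # hashes match: safe to proceed
else
    echo "checksum mismatch: refusing to run $script" >&2
fi
```

With the placeholder hash, the script refuses to run the installer; only a file whose digest matches the independently obtained value gets executed.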