Socket researchers discovered an active attack campaign and named it SANDWORM_MODE, after the "SANDWORM_*" environment variables built into the malware's execution-control logic. At least 19 typosquatted packages, posing as popular developer utilities and AI-related tools, were published under various pseudonyms, xrust writes. Once installed, these packages set about collecting secrets from local environments and CI systems, then use the stolen tokens to modify other repositories.
The malware also carries a Shai-Hulud-style "switch", disabled by default, that triggers a wipe of the home directory when activated. Researchers called the campaign a "real and high-risk" threat and advised security professionals to treat these packages as active compromise risks.
A typo away from takeover
The campaign begins with typosquatting: the attackers publish packages whose names are almost identical to legitimate ones, betting either on a developer's typo or on an AI coding assistant failing to notice the incorrect dependency.
The typosquats target several popular developer utilities in the Node.js ecosystem, cryptography tools and, perhaps most notably, the rapidly growing class of AI programming tools:
— three packages imitate Claude Code;
— one targets OpenClaw, an AI agent that recently surpassed 210 thousand stars on GitHub.
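The typosquatting pattern described above can be checked for mechanically. The sketch below (not from the report; the "popular" list and the distance threshold are illustrative assumptions) flags a dependency name that sits within a small edit distance of a well-known package without being that package:

```python
# Illustrative typosquat check: flag dependency names that are a near-miss
# of a well-known package name. The POPULAR set is a made-up example.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

POPULAR = {"express", "lodash", "chalk", "commander"}  # hypothetical allowlist

def suspicious(dep: str) -> bool:
    """True if dep is close to a popular name but is not that name itself."""
    return dep not in POPULAR and any(edit_distance(dep, p) <= 2 for p in POPULAR)

print(suspicious("expresss"))  # near-miss of "express" -> True
print(suspicious("express"))   # exact popular name -> False
```

A threshold of 2 is a crude heuristic and will over-flag very short names; real registries and tools such as Socket use richer signals, but the core idea is the same.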
After installing and running the package, the malware searches for sensitive credentials, including npm and GitHub tokens, environment secrets, and cloud keys. These credentials are then used to propagate malicious changes to other repositories and introduce new dependencies or workflows, extending the infection chain.
Additionally, the campaign uses a modified GitHub Action that can amplify the attack inside CI pipelines by extracting secrets during builds and enabling further distribution.
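One common mitigation against a malicious or tampered GitHub Action is pinning every third-party action to a full commit SHA rather than a movable tag or branch. The toy checker below (an assumption-laden sketch, not an official tool; the sample workflow is fabricated) flags `uses:` references that are not SHA-pinned:

```python
# Flag GitHub Actions references that are pinned to a tag or branch
# instead of an immutable 40-character commit SHA.
import re

USES = re.compile(r"uses:\s*([\w.\-/]+)@([\w.\-]+)")
FULL_SHA = re.compile(r"^[0-9a-f]{40}$")

def unpinned_actions(workflow_text: str) -> list:
    """Return action references not pinned to a full commit SHA."""
    return [f"{name}@{ref}" for name, ref in USES.findall(workflow_text)
            if not FULL_SHA.match(ref)]

sample = """
steps:
  - uses: actions/checkout@v4
  - uses: someorg/build-helper@main   # hypothetical third-party action
"""
print(unpinned_actions(sample))
# ['actions/checkout@v4', 'someorg/build-helper@main']
```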
Infecting the AI developer toolchain
What sets this campaign apart is its direct attack on AI-based automatic programming systems. The malware deploys a malicious Model Context Protocol (MCP) server and registers it in the configurations of popular AI tools, embedding itself in the environment as a trusted component.
Once that foothold is in place, prompt injection techniques can trick the AI into accessing sensitive local data, such as SSH keys or cloud service credentials, and handing it over to the attacker without the user's knowledge.
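Since the attack works by adding an entry to an MCP client's configuration, auditing that configuration against an allowlist is a natural check. The sketch below assumes a JSON config with an "mcpServers" key, which many MCP clients use; the allowlist and the config content are fabricated for the example.

```python
# Compare the MCP servers present in a client config against servers
# you deliberately installed, surfacing anything injected by a third party.
import json

ALLOWLIST = {"filesystem", "git"}  # hypothetical: servers you installed yourself

def unexpected_servers(config_text: str) -> list:
    """Return configured MCP server names absent from the allowlist."""
    config = json.loads(config_text)
    return sorted(set(config.get("mcpServers", {})) - ALLOWLIST)

config = '{"mcpServers": {"filesystem": {}, "helpful-tools": {}}}'
print(unexpected_servers(config))  # ['helpful-tools']
```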