Researchers from the University of California have uncovered a new class of infrastructure-level attacks that threaten the security of cryptocurrency wallets and developer environments. Their findings, detailed in a study published on arXiv, indicate that this type of crypto theft is not just theoretical; it has already occurred in real-world scenarios.
The study, titled “Measuring Malicious Intermediary Attacks on the LLM Supply Chain,” involved testing 428 AI API routers. The results were alarming: 9 routers were found to actively inject malicious code, 17 routers accessed researchers' AWS credentials, and at least one free router successfully drained Ethereum (ETH) from a researcher-controlled private key.
The attack vector focuses on the AI agent routing layer, which has rapidly expanded as AI agents become integral to blockchain workflows. The critical question that arises is no longer whether this threat exists, but rather how many compromised routers are currently managing live user sessions.
Key Findings from the Research
- Testing Scale: Researchers evaluated 428 routers, including 28 paid routers sourced from various online platforms and 400 free routers from public communities. They used decoy AWS Canary credentials and encrypted crypto private keys during their tests.
- Malicious Activity Confirmation: The study confirmed that 9 routers injected malicious code, 17 accessed AWS credentials, and 1 router drained ETH from a wallet controlled by a researcher.
- Adaptive Evasion Techniques: Two routers demonstrated advanced evasion strategies, activating malicious behavior only after 50 API calls to avoid early detection.
- Attack Mechanism: These routers function as application-layer proxies with access to plaintext JSON payloads, lacking any encryption standards that would typically restrict their ability to read or modify data in transit.
- Exposure Through Key Leaks: Compromised OpenAI keys processed over 2.1 billion tokens, leading to the exposure of 99 credentials across various sessions.
- Recommended Defenses: The researchers advocate for implementing client-side fault-closure gates, anomaly detection for responses, append-only audit logging, and cryptographic signing of LLM responses to enhance security.
The traditional architecture for LLM API infrastructure is built around a simple request-response model, where clients send prompts to routers that relay them to the model provider. Malicious routers exploit this trust model by acting as intermediaries with full access to plaintext data flowing through them. They can modify responses, inject harmful code, or exfiltrate sensitive information such as private keys and API credentials.
Researchers created an agent named “Mine” to simulate various attack types against public frameworks, with a focus on autonomous YOLO-mode sessions. These sessions allow agents to execute actions without human verification, providing malicious routers with ample opportunity to intervene undetected.
The researchers also revealed the poisoning threat posed by leaked OpenAI API keys, which can be processed through compromised routing infrastructure, resulting in rapid exposure of sensitive credentials.
Identifying the Vulnerable and Inadequate Defenses
The core issue is not the existence of third-party API routers but the flawed trust model underpinning AI agent infrastructure. This model assumes that routing layers are neutral, yet there is no effective enforcement mechanism to validate this assumption on a large scale.
Developers frequently utilize third-party routers for API calls in creating on-chain tools, DeFi automation scripts, and trading agents. Particularly problematic are the free routers from public communities, where the majority of malicious injectors were identified, making them appealing for their cost-saving benefits.
Existing wallet security measures—such as hardware wallets, multisig setups, and offline key storage—are insufficient against attacks from routers that can intercept private keys or inject malicious code into deployment scripts before execution.
The impact of these vulnerabilities is significant; annual losses from crypto theft have already reached $1.4 billion. This new attack vector highlights the need for vigilance as it does not rely on breaking cryptography but instead on compromising middleware that users often overlook.
YOLO-mode autonomous sessions represent a particularly high-risk exposure point, as they allow agents to perform complex transactions without human checks, providing a wider window for malicious routers to act undetected.
The findings were echoed by industry figures, including Solayer founder @Fried_rice, who emphasized the systemic vulnerabilities associated with third-party API routers, especially in the context of growing reliance on autonomous agents in DeFi.
The researchers' recommendations emphasize the need for client-side controls that can halt execution when anomalies are detected, along with enhanced logging and cryptographic verification standards that ensure the integrity of LLM responses in the long term.
Source: Cryptonews News