AI Cybersecurity
Chinese Grey Market Resells Claude API at 90% Discount via Proxy Networks
Oxford China Policy Lab researcher Zilan Qian found that proxy networks known as "transfer stations" operate openly on platforms including GitHub, Taobao, and Telegram. These networks sustain rock-bottom pricing through a combination of stolen credentials, model substitution, and harvesting users' prompts and outputs for resale as AI training data.
The findings give credence to warnings issued in recent weeks that restrictions on access to frontier AI models are being systematically circumvented.
Qian's research describes a modular supply chain in which most participants handle only one or two links. Upstream operators bulk-register Anthropic accounts, and downstream resellers market the pooled access through storefronts on platforms such as Taobao and Telegram.
To defeat Anthropic's newest identity verification requirements, which now include photo ID and live selfie checks for some users, the supply chain has recruited real people in lower-income countries to complete verification in person. The Worldcoin biometric black market, where iris scans harvested in Cambodia and Kenya were sold for under $30, provided a template for this approach.
German researchers at the CISPA Helmholtz Center for Information Security audited 17 of these proxy services and found widespread model substitution. Proxy access marketed as "Gemini-2.5" scored just 37% on a medical benchmark where the official API scored nearly 84%. Users requesting Claude Opus may instead receive responses from cheaper models such as Sonnet, Haiku, or even domestic Chinese alternatives like Qwen, with the output fraudulently relabeled.
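The audit logic behind findings like these can be reduced to a simple comparison: run the same benchmark questions against the official API and the proxy, then flag proxies whose accuracy falls far below the official score. The sketch below illustrates that idea; the scoring function and the 20-point gap threshold are hypothetical simplifications, not details from the CISPA study.

```python
# Illustrative sketch of benchmark-based model-substitution detection.
# The threshold and helper names are hypothetical, not from the audit.

def benchmark_score(answers: list[str], expected: list[str]) -> float:
    """Fraction of benchmark questions answered correctly."""
    correct = sum(a == e for a, e in zip(answers, expected))
    return correct / len(expected)

def flag_substitution(official: float, proxy: float,
                      gap_threshold: float = 0.20) -> bool:
    """Flag a proxy whose benchmark accuracy falls far below the
    official API's score on the identical question set."""
    return (official - proxy) > gap_threshold

# Figures from the article: the proxy's "Gemini-2.5" scored 37% on a
# medical benchmark where the official API scored nearly 84%.
print(flag_substitution(0.84, 0.37))  # a 47-point gap is flagged
```

A gap this large is hard to explain by sampling noise alone, which is why score deltas on held-out benchmarks are a practical fingerprint for silent model swaps.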
The proxy operators also collect every prompt and response that passes through their servers. For coding agents, that means complete reasoning chains, repository context, and human-verified outputs. Several Chinese developers told Qian that the access markup is essentially customer acquisition, and that harvesting those logs is the actual business.
Proxy-harvested reasoning data is valuable for distillation: complete reasoning chains captured from a frontier model can be used to train smaller competing models to imitate it. But the potential security exposure extends beyond model training, because coding agents routinely pass repository context, API structures, and authentication logic through to the model. Samsung encountered a version of this problem in 2023, when its fab engineers pasted proprietary source code into ChatGPT and inadvertently disclosed confidential semiconductor manufacturing data to OpenAI's servers.
Anthropic blocked Chinese-controlled entities from Claude access in September and has since added progressively stricter verification, but Qian's research suggests each new control has generated a corresponding evasion market rather than reducing overall unauthorized use.
Sources
Published by Tech & Business, a media brand covering technology and business.
This story was sourced from Tom's Hardware and reviewed by the T&B editorial agent team.