Anthropic has publicly accused three major China-based AI companies (DeepSeek, Moonshot AI, and MiniMax) of systematically abusing access to its Claude model through large-scale “distillation” campaigns. Across both liberal and conservative coverage, reporters agree that Anthropic alleges these firms created tens of thousands of fake or proxy-linked accounts, routed their traffic through various intermediaries, and sent at least 16 million queries to harvest Claude’s outputs, which were then used to train or improve their own models. Both sides describe this as an “industrial-scale” or “large-scale” coordinated operation, note that Anthropic has grouped the firms together as part of a single pattern of behavior, and point out that the accusations come on the heels of similar warnings from OpenAI about comparable campaigns targeting its systems. Outlets across the spectrum also agree that Anthropic characterizes the effort as free-riding on its proprietary technology, says the activity has grown more persistent and sophisticated over time, and frames the alleged campaigns as violations of its terms of service that exploit weaknesses in access controls and account verification.

Coverage from both liberal and conservative sources also converges on the broader context in which these accusations sit: a fast-escalating contest over advanced AI between U.S. and Chinese firms, and a wider industry debate about how to handle model distillation and data scraping. They commonly explain that distillation is a known practice used to compress or replicate model behavior by training on another system’s outputs, but that companies like Anthropic and OpenAI increasingly argue it becomes theft when done covertly and at scale against proprietary systems. Articles across the spectrum situate the dispute within ongoing concerns about intellectual property, model safety, and geopolitical competition, including worries that such tactics could shift the competitive balance or even pose national security risks if advanced capabilities diffuse too quickly. There is broad agreement that the episode underscores the need for more robust technical safeguards, clearer legal frameworks, and possibly regulatory reforms to govern cross-border use of AI services and the reuse of model outputs for training.
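
To make the term concrete, here is a minimal, purely illustrative sketch of classic knowledge distillation, in which a small “student” network is trained to imitate a larger “teacher.” Everything below (the toy networks, the random data, the hyperparameters) is a hypothetical stand-in chosen for brevity; it is not a reconstruction of any pipeline alleged in the coverage, where the “teacher” would be a proprietary model queried through an API and the training data would be harvested prompt-and-response pairs.

```python
# Toy knowledge-distillation sketch (illustrative only; all networks and data
# are hypothetical stand-ins, not any firm's actual pipeline).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# A larger "teacher" and a smaller "student" over the same input/output spaces.
teacher = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
student = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's distribution so small differences carry signal

for step in range(200):
    x = torch.randn(32, 16)              # stand-in for collected prompts
    with torch.no_grad():
        teacher_logits = teacher(x)      # stand-in for harvested model outputs
    student_logits = student(x)
    # Standard distillation loss: KL divergence between the softened
    # student and teacher distributions, rescaled by temperature**2.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

One practical difference worth noting: a public API usually returns sampled text rather than the full output distribution, so API-scale distillation of the kind alleged here would train on hard examples (prompt-and-completion pairs) rather than the soft probabilities shown above, a noisier but conceptually similar form of imitation.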

Areas of disagreement

Framing of the threat. Liberal-leaning outlets tend to foreground concerns about systemic risks to the global AI ecosystem and national security, emphasizing how industrial-scale distillation could undermine safety, trust, and responsible model deployment. Conservative outlets, while acknowledging security concerns, more strongly highlight economic free-riding and unfair competition, casting the Chinese firms’ actions as opportunistic exploitation of U.S. innovation. Liberal outlets more often place the case within a broader pattern of escalating AI arms-race dynamics, whereas conservative outlets frame it as another example of China taking advantage of Western openness.

Attribution and intent. Liberal coverage generally emphasizes the corporate behavior of DeepSeek, Moonshot AI, and MiniMax, focusing on how their alleged tactics fit into industry norms and gray areas around data use, and is somewhat more cautious about implying direct state orchestration. Conservative coverage is more inclined to suggest that these private firms are closely aligned with, or at least operating in an environment shaped by, Chinese state objectives, framing the distillation as part of a quasi-strategic campaign. As a result, liberal sources lean toward describing the conduct as aggressive competitive maneuvering, while conservative sources more readily portray it as intentional appropriation tied to a hostile geopolitical rival.

Policy and regulatory implications. Liberal-aligned outlets tend to stress the need for comprehensive regulation of AI data use, cross-border access, and safety safeguards, often invoking international cooperation and stronger global norms around model outputs and training data. Conservative outlets focus more on tightening U.S. protections, such as stricter export controls, access restrictions, and legal tools to deter or punish foreign entities that misuse American AI services. While liberal outlets emphasize multilateral frameworks and industry standards, conservative outlets stress reinforcing national defenses and reducing U.S. vulnerability to technological exploitation by China.

Industry practice and norms. Liberal coverage more explicitly acknowledges that distillation and training on other models’ outputs are widespread techniques, raising hard questions about where normal practice ends and theft begins, and suggesting that stricter, clearer industry norms are needed across all players. Conservative coverage is more likely to downplay that ambiguity and to draw a sharper line between Anthropic’s proprietary work and what it portrays as overtly parasitic behavior by Chinese competitors. Consequently, liberal outlets present the dispute as exposing systemic ambiguities in current AI development practices, while conservative outlets present it chiefly as a case of foreign actors blatantly crossing an otherwise clear ethical and legal boundary.

In summary, liberal coverage tends to situate Anthropic’s accusations within a broader, structurally problematic AI ecosystem that needs stronger global norms and safeguards, while conservative coverage tends to stress Chinese firms’ opportunistic exploitation of U.S. technology and the need for tougher national protections and enforcement.