Meta helped build China’s DeepSeek: Whistleblower testimony

Former Meta executive Sarah Wynn-Williams is set to testify before the US Senate Judiciary Committee on Wednesday, revealing how the company’s AI model, Llama, played a critical role in accelerating China’s AI capabilities — particularly contributing to the rise of DeepSeek.

Actively endorsed by Chinese officials, DeepSeek has emerged as a formidable rival to OpenAI, with a reported development cost of just $6 million, a fraction of what most large language models require. Its emergence has created ripples across the world, with many seeing it as an affordable alternative to AI models from OpenAI and Meta.

Describing “Project Aldrin,” Meta’s secret initiative to enter China, Wynn-Williams’s written testimony stated, “Meta built a physical pipeline connecting the United States and China. Meta executives ignored warnings that this would provide backdoor access to the Chinese Communist Party, allowing them to intercept the personal data and private messages of American citizens. The only reason China does not currently have access to US user data through this pipeline is because Congress stepped in.”

She also claimed that “Meta’s AI model – Llama – has contributed significantly to Chinese advances in AI technologies like DeepSeek.”

According to the testimony, Meta began briefing the Chinese Communist Party as early as 2014 on critical emerging technologies, including AI, with the explicit goal of helping China outcompete US companies.

“There’s a straight line you can draw from these briefings to the recent revelations that China is developing AI models for military use, relying on Meta’s Llama model. Meta’s internal documents describe their sales pitch for why China should allow them in the market by quote ‘help[ing] China increase global influence and promote the China Dream,’” Wynn-Williams’s testimony read.

AI Arms Race

These revelations surface at a tense moment in US-China relations, as Washington continues to impose export restrictions on advanced AI chips in a bid to curb China’s progress in developing next-generation generative AI models.

“The United States has introduced regulatory guardrails, including export controls and multilateral agreements, to prevent unintended AI knowledge transfer — particularly to strategic competitors like China. Yet enforcement remains a major challenge, as loopholes and evasion tactics limit effectiveness. The core challenge lies in balancing national security with the need to foster domestic innovation,” said Prabhu Ram, VP of Industry Research Group at CyberMedia Research.

The recent disclosures alleging that Meta aided China’s AI development could significantly undermine global efforts to safeguard sensitive AI technologies. If proven true, they may trigger fresh calls for stricter compliance, a re-evaluation of public-private partnerships, and even new international AI norms.

Such a breach of trust could erode collaboration among democratic AI powers and give China a strategic edge in critical AI applications, including military and surveillance.

Because these revelations could prompt further guardrails restricting the development of global AI models, Ram cautioned, “Overly broad restrictions could stifle US research and weaken its global AI leadership. Targeted, proportionate controls and stronger enforcement are needed.”

However, according to a Rest of World analysis, the US and China have been each other’s most frequent partners in AI research over the past 10 years.

Open Source at a Crossroads

Open-source models such as Llama give developers and companies the freedom to train, fine-tune, and run AI on their own infrastructure — enabling full control over performance, privacy, and cost. They also help avoid lock-in to closed vendors, making it easier to build secure, efficient, and future-ready AI systems tailored to specific needs.

Open source has significantly lowered the barriers to entry for AI innovation, and experts believe Meta’s Llama models have been central to this shift, enabling a wide range of companies to build AI solutions. Yet this openness also raises complex questions around ownership, accountability, and national security, especially when models are repurposed in jurisdictions with differing regulatory norms and strategic goals.

This new revelation of Meta playing a crucial role in the development of DeepSeek highlights the growing tension between openness and strategic control. “Emerging markets are likely to accelerate efforts to establish clearer AI governance frameworks — balancing local innovation with context-specific challenges. At the same time, they will seek to reduce risks of misuse and dependency on external foundations through proactive guardrails that ensure responsible oversight and effective management,” Ram said.
