OpenAI spends $5,000 per video hiring KOLs to film "Chinese AI Threatens Personal Data" content, attempting to influence AI regulation in 2026

An investigative report published by WIRED over the weekend reveals that OpenAI co-founder Greg Brockman, Palantir co-founder Joe Lonsdale, and Silicon Valley executives from a16z are hiring TikTok creators through a project under a super PAC called "Leading the Future," paying $5,000 per video to produce content following a "Chinese AI threat" script—with creators explicitly instructed not to disclose the true sponsors. This public opinion operation runs in two stages and ultimately aims to influence AI regulation legislation before the 2026 U.S. midterm elections, preventing the tech industry from being held responsible for data scraping.

(Background summary: OpenAI counters Musk’s hostile takeover! Plans to grant non-profit board “special voting rights” to prevent hostile acquisitions)

(Additional context: Microsoft and OpenAI collaboration hits a snag? The Wall Street Journal: widening rift between both parties)

Table of Contents

  • Three-layer structure: PAC, non-profit, marketing agency
  • A two-stage script that disguises true intentions
  • OpenAI: “Personal actions, unrelated to the company”
  • Why KOLs, why now

A 24-year-old beauty KOL is unboxing skincare products on camera when she suddenly shifts tone: “China will steal your and your child’s personal data to win the AI race.” The video is marked #ad, but she does not disclose who the actual sponsor is. WIRED’s in-depth investigation released over the weekend reveals that the money paying her comes from some of Silicon Valley’s top AI executives.

Three-layer structure: PAC, non-profit, marketing agency

WIRED reporters point out that the structure of this public opinion operation is deliberately designed to be complex and hard to trace. The marketing agency SM4 directly contacts creators, negotiates a fee of $5,000 per video, and delivers scripts written by the non-profit organization Build American AI. Build American AI is a project under the super PAC “Leading the Future”—which has raised $140 million, with $51 million still available.

The funding list includes: OpenAI co-founder Greg Brockman (who, along with his wife, donated in a personal capacity), Palantir co-founder Joe Lonsdale, venture capital giant Andreessen Horowitz (a16z), and AI search startup Perplexity.

Instructions to creators are clear and straightforward: do not tag Build American AI, do not mention who paid, just follow the script.

A two-stage script that disguises true intentions

WIRED’s investigation describes this operation as progressing in two phases. Phase 1 focuses on “American AI innovation” with warm visuals and positive tone, aiming to generate goodwill around AI topics. Phase 2—currently underway—shifts to a “Chinese threat” fear narrative, centered on three themes: Beijing stealing personal data, taking American jobs, and endangering the next generation.

Of particular note is how this story came to light: WIRED reporters were able to expose the operation because its organizers proactively approached them, inviting them to participate.

OpenAI: “Personal actions, unrelated to the company”

An OpenAI spokesperson stated that the company has “no corporate connection” with Build American AI and Leading the Future, and has not provided funding or support.

Legally, this statement holds up—Brockman did donate as an individual, not on behalf of OpenAI. But critics point out that this is the essence of "dark money" operations: using personal capacities to keep the organization at arm's length, obscuring financial flows through layered structures, and leaving responsibility permanently in a legal gray area.

Drop Site News’s concurrent report cites researchers comparing this tactic to 1950s American tobacco companies enlisting doctors to endorse smoking as “harmless”—first packaging messages with trusted faces, then using complex financial structures to blur accountability.

Why KOLs, why now

The timing is no coincidence. According to WIRED’s cited data, 53% of American adults get news from social media, and among those under 30, 38% frequently rely on KOLs as information sources. Traditional media trust continues to decline, and KOLs’ “authenticity” fills this gap—yet when the scripts are written by PACs and paid for by tech executives, this “authenticity” becomes a carefully crafted illusion.

The political goal of this operation is also clear: before the 2026 U.S. midterm elections, shape public opinion and suppress support for AI regulation, thereby influencing congressional legislation—especially provisions that could hold tech companies accountable for large-scale data scraping.
