The real threat AI poses to journalism isn’t deepfakes — it’s DeepSeek

By Josh Brandau 

Nota

Los Angeles, California, United States


Public anxiety about Artificial Intelligence in 2025 misses the mark. Deepfakes grab headlines, but journalism faces a quieter threat.

DeepSeek, developed in China, represents a sophisticated evolution in information control that reshapes global narratives.

Traditional censorship blocks content — DeepSeek transforms it

Released as open-source software, the model gained rapid adoption across global markets. Its censorship operates with calculated precision.

When users ask about the Uighurs, Taiwan, or Tiananmen Square, DeepSeek first presents state-approved narratives, then erases them entirely, replacing its answer with, “Sorry, that’s beyond my current scope.”

DeepSeek is a prime example of how AI can not only rewrite facts, but erase them entirely.

This pattern builds on years of AI-driven information control.

China’s ERNIE Bot (owned by Baidu) used the same approach in 2023, systematically deflecting questions about sensitive topics. ByteDance’s content algorithms shape information flow on TikTok and on its popular chatbot, Doubao. Western companies maintain their own AI models with undisclosed biases.

The 2024 U.S. elections revealed these risks

According to the Microsoft Threat Analysis Center, Chinese influence operations targeted down-ballot Republican candidates who advocated for anti-Chinese policies, including campaigns against U.S. Representative Barry Moore, U.S. Senator Marsha Blackburn, and U.S. Senator Marco Rubio. Actors spread antisemitic messages and amplified corruption accusations. Truth eroded through accumulated small changes to the information landscape.

DeepSeek signals a shift in scale and sophistication. Unlike previous models that simply blocked content, it actively reshapes narratives. Its rapid adoption demonstrates how quickly biased AI systems can penetrate global information networks. Each query processed through these systems subtly alters how readers understand events, creating distinct information realities across different regions.

The journalism community lacks protection against these changes

Professional standards for AI development remain undefined. Existing mandates face constant challenges. Few mechanisms exist to test for bias or enforce transparency about AI’s role in news production.

Without safeguards, the foundation of factual reporting weakens.

This represents a new frontier of information control. Future models will hide their biases more skillfully than DeepSeek. The success of embedding geopolitical agendas into widely adopted AI tools will inspire other nations to develop their own narrative-shaping systems.

When every company becomes a media company through AI-driven content generation, maintaining human oversight becomes crucial for preserving a free and informed society.

