Why AI fails when newsrooms need it most

By Parikshit Bhardwaj

New Delhi, India


When Pakistan’s Dawn, one of South Asia’s oldest English newspapers, accidentally printed an AI prompt in its November 12, 2025, edition (“If you want, I can also create an even snappier front-page style version …”), the error was easy to mock.

Inside newsrooms, though, it didn’t land as a joke. It felt like a warning.

The incident didn’t happen because Dawn lacked talent or editorial standards. It happened because modern newsrooms are being pushed into an impossible equation: publish faster, with fewer people, while relying on tools that don’t understand consequences.

Small newsrooms being pushed to achieve more with AI tools can lead to mistakes.

AI can churn out copy, all right. But it cannot shoulder the one thing journalism depends on: judgment.

Why AI breaks at the worst possible moment

AI’s strengths are essentially retrospective and pattern-based. It can summarise timelines, generate background explainers, and reorganise structured text. In short, it excels at yesterday.

But digital newsrooms don’t monetise yesterday. They monetise now.

In my years overseeing editorial strategy for Jagran New Media’s flagship youth and education website, our biggest traffic spikes didn’t come from evergreen explainers. They came from live, high-pressure news moments where information moved fast and verification mattered more than volume.

Those moments are exactly where AI performs the worst. The information is incomplete, context is fluid, and the signals AI relies on are often missing. Breaking news depends on sourcing, escalation, instinct, and sometimes the ability to pause when something feels “off.”

No model can replicate that pause — at least, not yet.

When speed meets AI without judgment

Two recent incidents illustrate the risk clearly.

At Dawn, a routine auto-industry report carried an entire AI prompt into print. The newspaper later confirmed the reporter had used AI in violation of editorial policy. This wasn’t a technical glitch; it was a workflow breakdown under pressure.

At The Economic Times, an AI-assisted article about a fake Trump-Obama arrest video included a fabricated quote attributed to disinformation expert Nina Jankowicz. The quote never existed. Once published, the error was indexed, aggregated, and later resurfaced inside LLM outputs. AI didn’t just hallucinate; it ended up validating its own error.

In both cases, AI wasn’t malicious. It did exactly what it is programmed to do: produce plausible text quickly. The failure lay in assuming plausibility equals reliability when the stakes are high.

Why Indian newsrooms feel this more

India’s digital news economy stacks the odds against deliberation: thinning newsrooms, dependence on Google/Meta, shrinking display ad margins, low domestic eCPM, and readers who are shifting from traditional news to short-form spectacle.

AI raises hope, but journalism and AI run on opposite logics. Newsrooms prize verification, restraint, and accountability. AI prizes speed, fluency, and plausibility. Without guardrails, AI isn’t a helpful assistant; it’s a risk multiplier.

The real question isn’t whether newsrooms should use AI, but where it belongs and where it absolutely doesn’t.

In practical terms, AI works best when it supports low-risk tasks: backgrounding, restructuring, headline variants, SEO scaffolding, or reformatting already verified material. It becomes dangerous when inserted into live reporting, political coverage, or any workflow where judgment and consequence outweigh speed.

AI can serve as a powerful tool, but it’s no replacement for human judgment.

What this moment is really telling publishers

AI will become more deeply integrated into newsroom operations. That is inevitable. It can reduce production load, improve clarity, and free teams to do more meaningful work.

But the core of journalism (verification, sourcing, judgment, and accountability for consequences) must remain human-led, because the cost of failure is the loss of public trust.

And that is the thread connecting the Dawn prompt and the hallucinated ET quote. AI can assist production, but it cannot inherit responsibility.

That remains human.

