News companies succeed with AI when they bring their people along the journey

By Dawn McMullan

Assisted by ChatGPT

Dallas, Texas, United States


When Holtzbrinck’s Katharina Neubert spoke in San Francisco, her message resonated far beyond technology. AI, the senior vice president/strategy and investments reminded the INMA audience, is not an engineering project but a human journey.

Success depends on how teams feel about the change — whether they see AI as a threat or an invitation to grow. 

Across the sessions of INMA’s Media Tech & AI Week, that sentiment echoed through every discussion: The biggest barrier to AI adoption in newsrooms is not capability but culture.

Why culture matters

AI is accelerating faster than most organisations can absorb. Yet as speakers repeatedly emphasised, technology outpaces psychology. Neubert described how early experiments at Holtzbrinck revealed the anxiety many staff felt when AI arrived — fears of redundancy, loss of creative control, or a future defined by machines rather than journalists.

Her framework for transformation began not with software but with safety: psychological safety to ask naïve questions, challenge assumptions, and experiment in public. Without that foundation, she said, no amount of technical training can overcome scepticism. 

Holtzbrinck’s approach pairs small pilot projects with transparent communication and visible leadership support. Each experiment is framed as a learning exercise, not a performance test. The goal is to normalise curiosity — and failure — as part of progress.

Hearst's Derrick Ho, editorial director/AI (left), and Tim O’Rourke, vice president/editorial innovation and AI strategy, talk with INMA study tour attendees.

That theme resurfaced throughout the week. At Hearst, Tim O’Rourke and Derrick Ho described a similar philosophy: AI rollouts succeed when product, tech, and editorial teams work side by side and share their results openly. “Innovation isn’t the problem,” one speaker noted. “It’s integration.” 

Frameworks that work

Neubert’s framework breaks the process into three layers: clarity, competence, and confidence.

  • Clarity comes from explaining why AI matters to the organisation’s mission and what problems it solves.

  • Competence comes from training — but training rooted in real tasks, not abstract theory.

  • Confidence comes from repetition and shared success stories, which convert curiosity into commitment.

At Hearst, the same iterative pattern appears in practice. Its editorial innovation group runs cross-team projects that combine engineers, reporters, and designers from multiple titles. Every pilot is documented, reviewed, and scaled only when the benefits are clear. The company’s mantra, as O’Rourke explained, is to “experiment responsibly” — balancing enthusiasm with ethical guardrails and editorial integrity.

Otter founder and CEO Sam Liang talks with study tour participants.

Even at technology partners, this structured experimentation mindset was evident. At Otter.ai, founder Sam Liang spoke about the “agentic era,” in which AI acts as a collaborator rather than a replacement. His team encourages employees to treat AI assistants as teammates whose suggestions need feedback. The company’s own internal practice — encouraging everyone to test the tools they build — reinforces that confidence loop Neubert described.

Bridging skills and trust

Trust, several speakers said, cannot be mandated. It must be earned through transparency. That begins with demystifying AI’s limitations as much as its strengths. 

The study tour visit to CTGT highlighted the importance of transparency and auditability. Founder Cyril Gorlla’s work on model “anchors” showed how newsroom leaders can evaluate what data an AI system depends on — a practice that reassures journalists their integrity is being protected.

LinkedIn’s Catherine Taibi extended the idea: Staff buy-in grows when they can see how AI decisions are made. By opening dashboards and showing journalists what drives recommendations or ranking, LinkedIn has increased understanding of AI systems across its own teams. In her presentation, Neubert made a related point: Every newly hired leader must bring tech literacy to the role.

For news organisations, similar openness — explaining why an algorithm surfaces one story over another — helps maintain editorial trust.

At Microsoft, Aparna Lakshmi Ratan described the same principle at enterprise scale. For her teams at MSN, every new AI feature launches only after an internal review focused on transparency, fairness, and explainability.

For news publishers, that means training not just developers but editors, marketers, and commercial leads to understand what AI can and cannot do.

The human multiplier

INMA’s The Debrief podcast, which closed the week, returned repeatedly to one truth: AI transformation is as much about mindset as machinery.

Speakers observed that organisations with flatter hierarchies and a tolerance for ambiguity are adapting fastest. They share context early, invite feedback broadly, and make experimentation part of everyone’s job, not just the innovation team’s.

Neubert’s Holtzbrinck example stood out because it connected cultural change with measurable outcomes. When AI was framed as an assistant that removes friction rather than control, staff participation increased dramatically. Employees began proposing their own AI use cases — from summarising research papers to refining marketing copy — and the technology shifted from novelty to necessity.

Katharina Neubert, senior vice president/strategy and investments at Holtzbrinck, speaking at INMA Media Tech & AI Week.

Hearst has seen similar results. After initial resistance, journalists who once feared automation began using AI to streamline mundane tasks like headline testing and image selection, freeing them to focus on reporting and storytelling. As one executive summarised, “When people experience benefit, belief follows.”

Leadership lessons

For leaders, the San Francisco discussions distilled several shared lessons:

  1. Lead with empathy, not urgency. Executives must acknowledge the discomfort of change. AI enthusiasm without reassurance breeds scepticism.

  2. Start small, scale slowly. Pilot programmes build credibility when they solve real problems, not hypothetical ones.

  3. Make learning visible. Publicly sharing successes and setbacks turns individual progress into institutional knowledge.

  4. Connect AI to purpose. Staff are more likely to engage when AI clearly supports journalism’s mission rather than abstract efficiency goals.

These lessons align closely with Neubert’s framework: practical, human, and grounded in shared values.

From fear to fluency

A recurring takeaway across the week was that fluency beats fear. Every organisation represented — from the Chronicle’s local newsroom to global technology platforms — has faced internal resistance. Yet each found that consistent exposure, supportive training, and clear communication eventually shift sentiment.

At Hearst, this shift was visible once staff realised that AI tools could flag potential errors faster or surface stories they might otherwise miss. At Holtzbrinck, teams began integrating AI into editorial planning after discovering that generative models could synthesise research material in seconds. The key was positioning AI not as an overseer but as a colleague.

Action points for leaders

Executives at Media Tech & AI Week left with tangible guidance for managing people through transformation:

  • Create space for experimentation. Formalise “sandbox” environments where employees can test tools without risk.

  • Invest in cross-functional literacy. Build teams that mix editorial, product, and data expertise to bridge silos.

  • Reward learning behaviours. Recognise curiosity and iteration, not just output.

  • Communicate continually. Regular updates from leadership sustain trust and reinforce progress.

A shared journey

The closing conversations in San Francisco underscored that no company — not even the tech giants — has solved this entirely. Everyone is still learning. But the most successful publishers are those treating AI adoption as a shared journey with their staff, not an edict from the top.

As Neubert summarised in her framework, transformation happens “with people, not to them.” When organisations create cultures of learning, transparency, and inclusion, AI becomes less of a disruption and more of a multiplier for human potential.
