AI won’t replace journalists, but it will favour those leveraging its strengths
Digital Strategies Blog | 06 March 2025
AI has been a transformative force in the world of journalism, but its full potential is still largely untapped.
True Artificial Intelligence, or Artificial General Intelligence (AGI), is the ultimate goal, not the current large language models (LLMs) often erroneously called AI. That said, I can't deny the current form of AI is valuable: 65% of industry leaders rely on it for content creation, distribution, and personalised recommendations.

Newsrooms also use AI to automate boring tasks like creating tags, categorising articles, or searching for related content.
With these and other applications, current AI is already propelling news organisations into a tech-driven future.
But keep in mind: This future isn’t solely about AI. While it may be integrated into many aspects of journalism, the human element remains crucial. After all, a truly tech-driven newsroom is one where journalists leverage AI to make their work more efficient.
Therefore, journalists must learn how to team up with AI. Those who can leverage its capabilities while still bringing their critical thinking, creativity, and ethical judgment to the table are the ones who will thrive.
Crafting news stories with AI: the new writing tool
One vital skill for a tech-driven newsroom is prompt engineering. The New York Institute of Technology defines prompt engineering as the process of designing and refining the inputs given to language models to achieve desired outputs. Whether you want to generate ideas for stories, craft outlines, create drafts, optimise content for different platforms, or carry out other tasks using AI, the ability to make AI produce what you need is essential.
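To make the idea concrete, here is a minimal sketch of what prompt engineering looks like in practice: assembling a structured prompt that gives a model a role, a task, explicit constraints, and the source material. The template, field names, and wording are illustrative assumptions, not tied to any particular model or newsroom tool.

```python
# A minimal sketch of prompt engineering: composing a structured prompt
# for a headline-optimisation task. The template and its fields are
# hypothetical examples, not a real newsroom system.

def build_headline_prompt(article_summary: str, channel: str, max_words: int) -> str:
    """Assemble a prompt with a role, a task, explicit constraints,
    and the source material."""
    return (
        "You are a news sub-editor.\n"
        f"Task: write a headline for the {channel} edition.\n"
        f"Constraints: at most {max_words} words, no clickbait, "
        "active voice, keep key facts intact.\n"
        f"Article summary: {article_summary}\n"
        "Return only the headline."
    )

prompt = build_headline_prompt(
    article_summary="City council approves a 2026 budget with new transit funding.",
    channel="mobile push notification",
    max_words=8,
)
print(prompt)
```

The skill lies less in the code than in the structure: a clear role, a bounded task, and constraints the model can actually follow.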
But here’s the thing: Not all journalists need this skill.
Prompt engineering is only necessary for newsrooms that directly work with generic, mainstream LLMs like Gemini, GPT-4, and Claude. However, in my experience as the co-founder of an agency serving publishers, I’ve learned most newsrooms frown upon these models due to privacy and data protection concerns.
Instead, they rely on their own self-hosted AI models, typically embedded into their content management systems (CMS). The CMS gives journalists access to the AI models but eliminates the need for them to engineer any prompts.
Sooner or later, every CMS for publishers will have vertical integration with one or multiple LLMs. These LLMs will seamlessly automate mundane tasks like tailoring titles for different distribution channels, optimising text for SEO, generating podcasts from articles, and fixing spelling.
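A hypothetical sketch of how a CMS might hide prompt engineering from journalists: each editorial action maps to a pre-written prompt template, so the reporter presses a button instead of crafting a prompt. The action names and templates below are assumptions for illustration, not any real CMS's API.

```python
# Illustrative only: a CMS resolving editorial button presses into
# pre-written prompts, so journalists never see or write prompts.
# Action names and templates are hypothetical.

PROMPT_TEMPLATES = {
    "seo_title": "Rewrite this title for search engines, under 60 characters: {text}",
    "fix_spelling": "Correct spelling and grammar without changing the meaning: {text}",
    "social_teaser": "Write a one-sentence social media teaser for: {text}",
}

def cms_action(action: str, text: str) -> str:
    """Turn a CMS button press into the full prompt that would be sent
    to the newsroom's self-hosted model."""
    template = PROMPT_TEMPLATES.get(action)
    if template is None:
        raise ValueError(f"Unknown CMS action: {action}")
    return template.format(text=text)

# The journalist only ever sees the button label, never the prompt:
print(cms_action("seo_title", "Local bakery wins national award"))
```

The design choice here is the point: the prompt engineering happens once, when the template is written, rather than every time a journalist uses the tool.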
Inevitability of AI-driven automation of certain manual tasks
AI is fast taking over routine tasks in media companies and newsrooms. A 2024 study by Alem Febri Sonni found that 73% of news organisations use AI for writing news, 68% for analysing data, and 62% for personalising content.
This increasing reliance on AI comes at a time when the media faces significant challenges, including rampant misinformation, deep-seated biases, and the overwhelming influence of social media personalities.
By leaving routine tasks to AI, journalists will have more time for the work that matters most: diving into complex issues, fact-checking claims, and prioritising verified sources of information.
Ethical oversight: journalism’s human element
The use of AI in journalism has led to the rise of several ethical issues.
The first, according to research on AI in journalism by Omar Abdallah, is data bias: AI models often generate content that reflects the biases of their creators and training data.
There's also the problem of bogus information. AI tools, while powerful, often hallucinate, generating convincing but entirely fabricated content.
These ethical concerns reveal a major duty for journalists: AI must be supervised. Journalists need to fact-check whatever AI generates for accuracy and authenticity. Having a human review and modify AI output in the newsroom also helps counter data bias.
Challenges and opportunities in building AI-ready journalists
According to the findings of Mathias Felipe de-Lima-Santos, the major obstacle to AI adoption in journalism is that journalists simply don't know how LLMs work. What's more, the fear of being replaced makes many journalists resistant to the AI shift.
One solution to these problems is training. By promoting AI-focused training programmes in newsrooms, journalists may come to realise AI is just a tool to improve their work, not replace them.
The most human-friendly solution I’ve found is integrating the AI into the CMS without explicitly highlighting its presence. This way, journalists can leverage it without being aware of the intricate workings behind it.
Preparing for the future of journalism
In a world rife with misinformation, the demand for authentic news will continue to grow — and so will the demand for news updates as they happen.
The best way to meet these demands without burnout is to merge AI and human journalism. The future of journalism is one where AI isn't a threat but an amplifier, allowing smaller newsrooms to become truly multi-channel and reach broader audiences across age groups.
Embracing AI with confidence
The market for AI in publishing is projected to be worth more than US$40 billion by 2033. To secure a piece of the pie, you'll need to brush up on your AI skills.
Don't misunderstand: AI isn't going to replace journalists. Rather, it will work alongside those who are sufficiently skilled at leveraging its capabilities.
My advice? Embrace it and understand how it works.