Agentic AI workflow may offer a solution for investigative journalism

By Sonali Verma

INMA

Toronto, Ontario, Canada


I regularly get questions from INMA members on the use of agents, so I was intrigued to come across a fascinating report that asks the question: “ChatGPT hasn’t quite hit the mark as an investigative reporting assistant — could an agentic AI workflow offer a better solution?”

The researchers developed a prototype system that, when provided with a dataset and a description of its contents, generates a “tip sheet” — a list of newsworthy observations that may inspire further journalistic explorations of datasets.

This system employs three AI agents, emulating the roles of a data analyst, an investigative reporter, and a data editor. “Just as human journalism benefits from collaboration, AI could also advance through teamwork,” wrote the authors of the report, researchers Joris Veerbeek and Nick Diakopoulos.

Overall, these three agents collaborate through four stages:

  1. Question generation: First, a dataset and its description are provided to the reporter agent, which is tasked with brainstorming a set of questions (with the number adjustable) that could be answered using the data.

  2. Analytical planning: For each question, the analyst drafts an analytical plan detailing how the dataset can be used to answer the question. The editor provides feedback on the plan and the analyst redrafts.

  3. Execution and interpretation: Each analytical plan is executed and interpreted by the analyst. The editor and reporter provide feedback, which the analyst incorporates, and the reporter then summarises the final results in bullet points.

  4. Compilation and presentation: All bullet points from the previous step are compiled, and a subset of the most significant findings is presented to the user in the tip sheet.
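The four stages above can be sketched as a simple pipeline. This is a minimal, hypothetical illustration of the structure described in the report, not the researchers' actual implementation: `call_llm`, the prompts, and the data shapes are all assumptions, with the real LLM call stubbed out.

```python
# Hypothetical sketch of the four-stage tip-sheet pipeline.
# The roles (reporter, analyst, editor) and stage order follow the
# article; every function name and prompt here is illustrative only.

def call_llm(role: str, prompt: str) -> str:
    """Placeholder for a real LLM call; returns a canned reply here."""
    return f"[{role}] response to: {prompt[:40]}..."

def generate_questions(dataset_description: str, n_questions: int = 3) -> list[str]:
    """Stage 1: the reporter brainstorms questions the data could answer."""
    reply = call_llm("reporter",
                     f"Given this dataset: {dataset_description}, "
                     f"suggest {n_questions} newsworthy questions.")
    return [f"Q{i + 1}: {reply}" for i in range(n_questions)]

def plan_analysis(question: str) -> str:
    """Stage 2: the analyst drafts a plan, the editor critiques it,
    and the analyst redrafts."""
    draft = call_llm("analyst", f"Draft an analysis plan for: {question}")
    feedback = call_llm("editor", f"Critique this plan: {draft}")
    return call_llm("analyst", f"Revise the plan using feedback: {feedback}")

def execute_and_interpret(plan: str) -> str:
    """Stage 3: the analyst executes and interprets the plan, incorporates
    editor feedback, and the reporter summarises the result as a bullet."""
    result = call_llm("analyst", f"Execute and interpret: {plan}")
    feedback = call_llm("editor", f"Review these findings: {result}")
    revised = call_llm("analyst", f"Revise the findings using: {feedback}")
    return call_llm("reporter", f"Summarise as a bullet point: {revised}")

def build_tip_sheet(dataset_description: str, keep: int = 5) -> list[str]:
    """Stage 4: compile all bullets and keep a subset for the tip sheet."""
    bullets = []
    for question in generate_questions(dataset_description):
        plan = plan_analysis(question)
        bullets.append(execute_and_interpret(plan))
    return bullets[:keep]

tips = build_tip_sheet("City procurement contracts, 2015-2024")
```

Note the shape: each stage hands a text artifact to the next, and the feedback loops in stages 2 and 3 are extra LLM round-trips rather than one-shot calls.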

Also interesting is how they work together: Throughout these stages, the agents don’t just passively use each other’s outputs as inputs but actively have to incorporate each other’s feedback, particularly during the analysis phase. 

For example, after the analyst completes its work in the third step, the reporter steps in to assess these findings. The reporter is then prompted to choose one of three options:

  1. Give a green light for “publication,” which signals the insight should be bulletproofed and potentially shared with the journalist supervising the agents.

  2. Suggest further analysis to try to develop other angles.

  3. Decide the findings aren’t newsworthy enough to pursue.
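That three-way decision acts as a routing gate on each finding. Here is a hypothetical sketch of how such a gate might be wired up; the option names mirror the article, but the enum, the keyword heuristic standing in for the reporter's LLM judgment, and the queue names are all illustrative assumptions.

```python
# Hypothetical sketch of the reporter agent's three-way review gate.
from enum import Enum

class Verdict(Enum):
    PUBLISH = "green light"        # bulletproof and surface on the tip sheet
    MORE_ANALYSIS = "dig deeper"   # send back to the analyst for new angles
    DROP = "not newsworthy"        # abandon this line of inquiry

def review_findings(findings: str) -> Verdict:
    """Placeholder reporter review: a real system would prompt an LLM;
    here a trivial keyword heuristic stands in for that judgment."""
    text = findings.lower()
    if "significant" in text:
        return Verdict.PUBLISH
    if "unclear" in text:
        return Verdict.MORE_ANALYSIS
    return Verdict.DROP

def route(findings: str, tip_sheet: list[str], analysis_queue: list[str]) -> Verdict:
    """Send the findings down one of the three paths."""
    verdict = review_findings(findings)
    if verdict is Verdict.PUBLISH:
        tip_sheet.append(findings)
    elif verdict is Verdict.MORE_ANALYSIS:
        analysis_queue.append(findings)
    # Verdict.DROP: discard silently
    return verdict

tips, queue = [], []
route("Significant spike in single-bid contracts after 2020.", tips, queue)
route("Unclear pattern in vendor addresses.", tips, queue)
```

The point of the gate is that "drop" is a first-class outcome: not every analysed finding reaches the human journalist, which is what keeps the tip sheet short.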

The team of agents was tested on five actual complex investigative data-journalism projects that had been nominated for awards.

“The results show that the process was surfacing leads with news potential which weren’t included in the original reporting,” the report said. “This means there is potential to inform avenues of investigation for new coverage.”

What is still not clear is how exactly the prompts, feedback loops, and knowledge bases contributed to the final outcome. 

“The system we’ve developed shows a lot of promise — it’s a tool that can help uncover valuable leads and provide new angles on complex stories,” the authors wrote. “But it’s also just that: a tool. The insights generated by these agents are a starting point, but the real work of journalism, the craft of telling a story that matters, remains firmly in human hands.”

If you’d like to subscribe to my bi-weekly newsletter, INMA members can do so here.

