Editors want to know whether an article will reach the largest possible relevant audience before they publish it.
“Most editors have a dream,” Mariano Blejman began during the second module of INMA’s Product and Data for Media Summit.
Using that dream as inspiration during a one-day hackathon, Blejman, chief digital officer of Argentina’s Grupo Octubre, championed the idea of a tool to help the newsroom do just that.
What the product can do
Blejman offered a quick example of what the tool, SmartStory.ai, can do before explaining how they got there.
First, he said, they had to start with something small. Rather than looking at an entire finished article, they decided to look at headlines. They also decided to use click-through rate (CTR) from Google as the indicator of what constituted success.
“We had this idea,” he said, “that there should be a relationship between co-occurrences of words in a headline and that headline’s success.” That became their working hypothesis. They trained the model on years’ worth of past headlines, sorted into two classes: a CTR above 6% counted as a “success,” while a CTR below 3% counted as a “fail.”
“With this approach, we obtained between 76% and 82% precision” in predicting which class a headline would fall into — that is, whether a story would succeed or fail based solely on its headline.
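The article does not describe the model itself, but the idea — label past headlines by CTR thresholds, then score new headlines by which class their word co-occurrences resemble — can be sketched roughly as follows. The headlines, CTR figures, and scoring rule here are all illustrative assumptions, not Grupo Octubre’s actual implementation:

```python
from itertools import combinations
from collections import Counter

# Hypothetical training data: (headline, historical CTR) pairs.
history = [
    ("economy rebounds after strike", 0.08),
    ("government announces new budget", 0.07),
    ("minor update to transit schedule", 0.02),
    ("committee meets to discuss agenda", 0.01),
]

def word_pairs(headline):
    """Co-occurring word pairs within a single headline."""
    words = sorted(set(headline.lower().split()))
    return set(combinations(words, 2))

# Label by the CTR thresholds from the article: above 6% is a "success,"
# below 3% is a "fail"; headlines in between are left out of training.
success_pairs, fail_pairs = Counter(), Counter()
for headline, ctr in history:
    if ctr > 0.06:
        success_pairs.update(word_pairs(headline))
    elif ctr < 0.03:
        fail_pairs.update(word_pairs(headline))

def predict(headline):
    """Classify a new headline by which class its word pairs resemble."""
    pairs = word_pairs(headline)
    s = sum(success_pairs[p] for p in pairs)
    f = sum(fail_pairs[p] for p in pairs)
    return "success" if s >= f else "fail"

print(predict("economy rebounds after budget"))
```

A production model would use far more data and a proper classifier, but the labeling step — discarding the ambiguous 3–6% middle band so the two classes are well separated — follows directly from the thresholds Blejman described.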
From hackathon to product in six months
After the hackathon day, a prototype was built specifically for the main newspaper in the Grupo Octubre group, Página/12.
The initial landing page was extremely simple: a field for entering a headline, and the product would tell you whether that headline would be successful. When they published headlines based on this information, they added a “title” tag so they could track the results more easily. After publishing 100 articles, they found the CTR prediction was correct for 80% of them.
The next step was to get feedback from the editors and build a road map for next steps. This meant importing previous examples of headlines that were successful, as well as headlines suggested by the AI. They decided to make it a Chrome browser extension to make it easy to implement and use, and added a report-generating function so editors could see whether it was working.
SmartStory not only tells editors which headlines are most successful in their own newsroom, Blejman explained; the tool also evaluates headlines from competitor sites and even news services like Google News.
While all of this was useful information, editors want to know whether a headline will be successful before publishing, not after, Blejman said. So, they added a function called “Editor’s Mode,” which allows the user to navigate to the tool in the middle of the editorial workflow. After the article and headline are written, the user can test the headline using SmartStory’s Editor’s Mode.
“You can experiment with different headlines of your own,” Blejman said, “or use the headlines suggested by the tool, which uses GPT-3 and our clustering validation.” These suggestions are displayed immediately: successful headlines get a green mark, and less successful ones get a yellow one. Users can then just “click the ‘use’ button next to the headline you want to use, and that headline goes back into your CMS so you can finish publishing.”
The reporting function gives users weekly, “yet manual,” reports so they can do deep dives into every headline, comparing the successes SmartStory predicted against what actually happened.
The extension was added to the Chrome store on September 1, and he said they have 130 weekly active users. It’s a small team of people working on it, but he said they’re shipping about one new feature per week. SmartStory is still very much in its infancy, but they’ve already gotten positive feedback from partners outside their organisation.
Blejman shared four lessons learned, including iterating fast and testing within the newsroom. Being able to “sit back and see what others do with your tool in the newsroom is a powerful methodology,” he said.