As UX changes, so should the interfaces news publishers build

By Jodie Hopperton


Los Angeles, California, United States


Hi there.

The U.S. was mostly closed down last week for Thanksgiving. While this is not a holiday I grew up with, it's one I can get behind as it's focused on food and family.

Over the holiday, I’ve had a chance to reflect on a couple of things that have been bugging me. Mainly these revolve around the UX of all the new products (or parts of products) that we’re building. Those products — combined with the possibilities AI offers consumers — make me wonder how we’re making it easier for the consumer to navigate.

If you have been thinking about these problems — or opportunities, depending on your point of view — I’d love to hear from you. E-mail me or book time with me through Calendly here.

Thanks, Jodie

UX is changing — these are the interfaces we should start building for

We need a new name for the article page. With all the trials of AI tools, the “article” page is no longer just about the article: It’s about assembling the right content to tell a story. In fact, I’m coining the term “story page” for the purposes of this article (and beyond, if it sticks!).

Aside from the headline, byline, date, and likely some kind of visual, a static article page may have some or all of the following:

  • Photo.

  • Video (horizontal or vertical).

  • Audio article.

  • (Interactive) graphic.

  • Summary.

  • Text.

  • Related content.

And, as we know, stories have multiple layers (broken down in modular journalism). 

That’s a lot for a consumer to digest when looking at a page. By offering so many things, we may be making it harder for those consumers to get to what they want.

The story page becomes fragmented, with elements telling the same story in separate ways rather than building on each other. The more offerings you have, the more likely it is that you need a new approach. I’d love to see an AI tool that helps us personalise this, but I have yet to find one that does the decluttering or serves up the right format.

That’s one end of the spectrum.

Now let’s look to the future with devices that are already being trialed: Apple’s Vision Pro, Humane’s new “pin” (although I am not giving good odds on whether it will take off), or even Rewind’s (slightly scary) Pendant.

Or perhaps the mid-term interfaces, such as ChatGPT and other chat products. Or even smart speakers (which are likely to gain more utility as chat features become mainstream; more detail in the recent audio report here).

All this to say that interfaces are changing. We’re likely not pivoting one way or another but are offering rich formats and choices for our readers — all the while consumer interfaces are likely simplifying to chat or audio.

So while we get excited about augmenting what we have and building new things, we mustn’t lose sight of our consumers and their desire to get content served to them on the right topics, at the right times, in the right formats. It’s a multilevel Rubik’s Cube that needs a lot of thought, planning, and testing.

If you have been thinking about this problem, I’d love to hear from you. E-mail me or book time with me through Calendly here.

Date for the diary: Tomorrow’s Webinar about what the world of product has in store for 2024

Behind the scenes at INMA, the Product Initiative Advisory Council meets regularly to have candid conversations about challenges in the product organisation space and how each group is tackling them. For the first time, INMA will open this conversation up to look back at the major issues they have been focusing on this year, new products that have been launched, how product teams are changing, and what’s on their minds as we go into 2024.

I’ll be in conversation with: 

  • Riske Betten, product director at Mediahuis.

  • Kara Chiles, SVP of consumer products at Gannett.

  • Julian Delany, CTO data and digital at News Corp Australia.

Please join us and ask your burning questions of these incredible product execs. Sign up here and feel free to send any questions to me ahead of time.

Audio products and enhancements are developing at lightning speed — here’s how to use them  

The pace of technological change is killing me. In September, we released an audio report packed full of useful insights on building audio products. We already need to add a new chapter.

Reporters have long used tape recorders so they can go back and double-check what someone has said. That capability is now available on steroids.

Not only can you replay audio, but it can be automatically transcribed in real time, annotated with notes, and summarised. If it’s a presentation, or someone is sharing a screen on a video call, the slides are automatically placed at the right points in the transcription. The transcript is searchable. In fact, all your transcripts are searchable, so you can also see who else was talking about the same topic or using the same phrases.

To be fair, I didn’t need to lead a recent INMA Silicon Valley Study Tour to know that. As any of you who has spoken with me knows, all my calls are transcribed. I couldn’t even begin to tell you how much time it has saved me, nor how it has improved my memory and accuracy.

Technically, you can join meetings without actually joining the meeting. Sometimes I’d love the TL;DR version of a one-hour call! It’s not so far into the future that the assistant will know your input so well it could predict what you would say — meaning you literally don’t have to join a meeting at all — although I’m not sure how much I would trust that until I see it in action. I also hate the fact that I am probably that predictable!

Right now, the tool is only available in English, but the company tells us it is planning to move into other languages. Competitor Trint, which was started by a veteran journalist (and which, for transparency, I helped launch in 2015), is already available in multiple languages and talks a lot about its collaboration tools.

We already collaborate in documents, but being able to do it in real time is helpful. And having an AI assistant on a call can be helpful, too, especially with acronyms (IYKYK).

This tool has a host of other benefits, too, such as transcriptions and summaries of podcasts, which in turn help SEO. Transcripts also make it easier to translate content into multiple languages, opening up different markets — or different languages that should be served within a single market (hello, friends in Switzerland, South Africa, New Zealand, and others).

But wait: You don’t need a separate translation step to turn words into another language. Synthetic voice companies such as Resemble AI simply create the audio version in many different languages. But, you ask, you can’t always just translate, as there are analogies and references that don’t carry over. I know this as a Brit in America when I talk about the weather in Fahrenheit or hear the term “inside baseball.” These large language models are building libraries of alternatives to make the translations truly native.

Everything I have heard about narrated articles is positive — more engagement, longer listen times, etc. — but I have a bugbear: Articles are written to be read, not to be listened to. Personally, I find them hard to engage with and sometimes zone out. Good news, folks: With a simple prompt, you can get the text rewritten to be more script-like, or chatty, which helped enormously. In fact, there are many prompts you can use to make text more appropriate for an audio product.

Maybe you create podcasts and have someone go through and edit out the ums and ers, or try to figure out how to get rid of background noise. Tools like Resemble and Adobe Podcast’s Enhance feature take average audio and make it excellent. In fact, we used the latter on the audio version of the audio report. It cleared up a couple of awkward moments with the synthetic voice — just like magic.

When we learned about Schibsted’s synthetic voice, we talked a little about how the company went about identifying what its “brand” sounds like. Now you can play around with that fairly easily. Maybe you give readers the choice? Or create a synthetic version of every journalist’s voice. Timelines for creating a voice have come down from several months to just minutes.

The companies that I have spoken with all have CMS integration available through an API. It takes a little configuration, but it’s now reasonable to integrate a full audio offering without having to set up studios and editing desks in your newsroom.  

Another impressive service I recently came across is Deepgram, a speech-to-text service available only as an enterprise product through an API. It was mainly developed for call centers but also has uses in news: sentiment detection, or finding the right place within an audio product to place or track ads.

Many of these AI audio product enhancements have been announced in the last couple of months. Whatever we think about the speed of change, one thing is sure: They present serious opportunities for news media at a fraction of the cost of creating all this manually. We would be crazy not to evaluate them and the benefits they can bring to our readers. Or perhaps we should call them our (potential) listeners ;). 

About this newsletter 

Today’s newsletter is written by Jodie Hopperton, based in Los Angeles and lead for the INMA Product Initiative. Jodie will share research, case studies, and thought leadership on the topic of global news media product.

This newsletter is a public face of the Product Initiative by INMA, outlined here. E-mail Jodie with thoughts, suggestions, and questions. Sign up for our Slack channel.

About Jodie Hopperton
