Audio grows in prominence, brings challenges into an AI world

By Jodie Hopperton

INMA

Los Angeles, California, United States

Last week, I hosted the INMA Los Angeles Innovation Study Tour. It’s virtually impossible to summarise 28 hours of visiting companies, eight hours of chatter between meetings, and 11 hours of fun. 

So today I’m going to focus on one aspect of this: audio. 

For a while I’ve been thinking the future is likely audio-driven (check out this report we published last year). Previously, I had only seen the opportunity unfolding. Last week, I realised that while there is some opportunity, an AI-driven world could be terrifying for news organisations unless we figure out a compensation model with LLMs/answer engines.

Are you an audio buff? If so, you can listen to a podcast of this newsletter that I created in approximately five minutes using Notebook LM. Let me know what you think. I’m at Jodie.hopperton@INMA.org.

Best, Jodie

The future is audio: Meta Ray-Ban smartglasses review 

Early this year I had dinner with Jovan Protic, COO of Ringier Axel Springer in Poland, who was coming to the end of his six-month secondment in Silicon Valley (a fascinating journey I recommend you look into if you haven’t heard about it already). He turned up in his new Meta Ray-Ban glasses, and, of course, I wanted to hear everything about them. He was the first person I had met who used them regularly, not just tried them.

I’ve been following the development of the glasses, and more and more people I know are starting to use them. Being an early adopter, I decided to buy some to try. I wax lyrical about AI lending itself to audio, so it was time I put my money where my mouth is. 

INMA Product and Tech Initiative Lead Jodie Hopperton models her Meta Ray-Ban smartglasses.

Here’s what I discovered:

What are they?

They are glasses, regular or sunglasses, with a camera, speakers, a microphone, and built-in AI. Somehow all of this fits into a fairly standard Ray-Ban style that isn’t too heavy, which is pretty impressive. The speakers are directional, so there is nothing in your ear, yet others can’t hear the audio you are playing. There is a charging case, again very much like a regular Ray-Ban case, that the glasses sit in. The case charges over USB-C, and the charge lasts several days even with heavy use.

How do they work? 

Once set up, the glasses automatically connect to your phone via Bluetooth. The speakers and microphone work much as wireless headphones do. The AI is activated by a button or by starting a sentence with “Hey Meta,” or “Hey Meta, look” if the question builds on what you are seeing. You then hear the response as audio.

What am I using them for?

I take calls on them and listen to podcasts all the time. It’s so nice not to have anything directly in my ear. I also find myself using them for photos, especially with my two small kids when I want to get a snap of them doing something cute (these moments can be fleeting!).

But I don’t use the AI much. I am trying to find use cases, but at the moment I’m not finding many. The only one that was genuinely helpful was when I was travelling to San Francisco and could ask all about the Salesforce elevated gardens I was walking around.

How does this pertain to news? 

I’ll be honest here: I just don’t know. Other than playing podcasts, I don’t see a use case. Could I ask for a news organisation’s view on something? Yes, of course. But in practice you don’t: You use the AI that’s available because it’s easier. This concerns me on a higher level, as you’ll see below.

Will I keep them? 

Yes, these are now my main sunglasses. To be fair, I don’t love that they are Meta. I’d prefer an Apple version. But since that’s not a possibility, I’ll take what’s available. I need to reconfigure them as I get a few too many notifications. For example, I don’t need to be alerted every time I get a WhatsApp message. But I genuinely love them now and will continue to wear them. 

For more, check out Ray-Ban’s site here.

Date for the diary: November 6 Webinar on Apple Intelligence, Meta Ray-Ban glasses, and more

I’m going to be diving into much of this, plus Apple Intelligence and a few other new technologies becoming available in the United States, in our next Product and Tech Initiative Webinar. I hope you can join what should be a genuinely collaborative session.

Sign up here. Or if you have a story to share, please contact me at jodie.hopperton@inma.org. 

Audio is an exciting yet terrifying future for news

During the INMA Los Angeles Innovation Study Tour last week, I had one of those big aha moments while speaking with a large AI company.

We were debating search and what news publishers would get out of this particular answer engine, which for most largely boiled down to traffic. The new AI-driven search engines keep much more of the answer on their own pages, with attribution links that may deliver more qualified traffic, since consumers who click through want to dig deeper or see more context. Or the engine may well have already answered the question, leaving no reason to click at all.

News executives during last week’s INMA Los Angeles Innovation Study Tour, looking at the future of audio at the Verizon 5G Innovation Lab.

For the sake of this discussion, let’s assume the former is true: News organisations will get less, but more qualified, traffic. There will therefore be a handoff from the answer engine to a news organisation’s site.

We can also make the logical assumption that both hardware and software point the same way: Audio is on the rise, and AI-enabled devices and services will accelerate it.

What does the handoff on audio look like? 

It’s hard to see how an audio answer can naturally hand off to either of our two biggest audio outputs: podcasts and narrated articles. The easiest and most likely handoff is a link to specific audio sent to a device. But the number of people likely to follow up when it means a switch of medium is greatly reduced. The traffic benefit simply isn’t there.

To be clear, it’s now very, very easy to make audio content from written text. Many news organisations use ElevenLabs, and myriad other companies offer similar services. It’s also becoming easier and easier to create podcasts from published content. Check out this podcast I created from this newsletter in approximately five minutes using Notebook LM.
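To give a sense of how little engineering this takes, here is a rough sketch of a narrated-article call using ElevenLabs’ public text-to-speech REST API. The endpoint shape follows ElevenLabs’ documentation at the time of writing, but the voice ID, model name, and environment variable below are placeholders, not anything a particular newsroom actually uses:

```python
# Minimal sketch: turn article text into an MP3 with ElevenLabs'
# text-to-speech API. The voice ID, model name, and env var name are
# placeholder assumptions; check the current ElevenLabs docs.
import os

import requests

API_KEY = os.environ["ELEVENLABS_API_KEY"]  # your account's API key
VOICE_ID = "your-voice-id"  # placeholder: any voice from your account


def narrate(article_text: str, out_path: str = "article.mp3") -> None:
    """Send article text to ElevenLabs and save the returned MP3."""
    response = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
        headers={"xi-api-key": API_KEY},
        json={
            "text": article_text,
            "model_id": "eleven_multilingual_v2",  # assumed model name
        },
        timeout=120,
    )
    response.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(response.content)  # response body is raw MP3 audio


if __name__ == "__main__":
    narrate("Last week, I hosted the INMA Los Angeles Innovation Study Tour.")
```

A production pipeline would batch this over a CMS feed and pick voices editorially, but the core call really is that small.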

But in these instances, where is the benefit to the original content creator? There is no traffic benefit to be had. 

Therefore we either need to figure out new audio products that can fit into this environment or create a robust monetisation method based on content used. 

Have you thought about this? What have I missed? I’d love to hear your thoughts. Please find me at Jodie.hopperton@INMA.org or book time with me here.

About this newsletter 

Today’s newsletter is written by Jodie Hopperton, based in Los Angeles and lead for the INMA Product and Tech Initiative. Jodie will share research, case studies, and thought leadership on the topic of global news media product.

This newsletter is a public face of the Product and Tech Initiative by INMA, outlined here. E-mail Jodie at jodie.hopperton@inma.org with thoughts, suggestions, and questions. Sign up to our Slack channel.
