OpenAI tests of scheduled news summaries, weather offer clues for media
Product & Tech Initiative Blog | 04 February 2025
I recently noticed an announcement that OpenAI is testing scheduled news summaries. Other news staples, such as weather, horoscopes, and cartoons, are being offered too.
Why does it matter? Because this marks a shift from query-led, reactive user search to proactive alerts. It is the start of AI agents working on your behalf (more about that below).
TL;DR: ChatGPT users, and likely users of all answer engines, will soon be able to get their own personalised news: not by asking for it, but as an alert at a personalised time of day.
How are we preparing for this?
The news alerts didn’t work at launch and are taking time to get ironed out — a product lesson in itself. I’ll re-review when this is fully formed, but there is a lot we can learn to start preparing now. Below are a few things I found interesting.
As soon as I saw the announcement, I wanted to try it for myself. There was no proactive prompt anywhere in the mobile or desktop app, but it’s a simple case of asking the question, and sign-up is very smooth. And by “sign up” I mean dictating exactly what we want.
The product has not given any nudges for this; for now, it’s purely on the user to know this feature is available.
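To make “dictating exactly what we want” concrete, here is a purely illustrative TypeScript sketch of the parameters a spoken request effectively boils down to. The names are mine; this is not OpenAI’s actual schema.

```typescript
// Purely illustrative: the structured request that a dictated sign-up
// ("send me media and tech news at 7 a.m., plus London weather")
// effectively amounts to. These names are made up, not OpenAI's schema.

type ScheduledSummaryRequest = {
  topics: string[];             // e.g. ["media", "technology"]
  location?: string;            // used for local news and weather
  deliveryTime: string;         // local time of day, e.g. "07:00"
  includeBreakingNews: boolean;
};

const exampleRequest: ScheduledSummaryRequest = {
  topics: ["media", "technology"],
  location: "London",
  deliveryTime: "07:00",
  includeBreakingNews: true,
};
```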

Also note “memory updated” at the top of the response. As ChatGPT gets to know you better, it can give better answers.
It’s not just scheduled summaries; breaking news is also included in this feature:

I’m still not getting my news unless I prompt. Some of it is good, such as the media and technology section at the top (both very on-point articles for this newsletter). But the location-based news alerts need a lot of work. For example, it is telling me about a “new” fire in Los Angeles that is actually a week old and that it has told me about before. And apparently there is no news in London.

How is success being measured?
A proactive push leaves little room for direct feedback, so the usual signals need rethinking (a rough sketch of how they might be instrumented follows below).
Attention time could be measured.
Click-through rate is an option, but this format is designed to give you the information you need, not a teaser to click on.
Follow-up questions may be the clearest sign of engagement.
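For those of us who would want to instrument this ourselves, here is a rough TypeScript sketch of how those three signals might be recorded and combined. Every name, field, and threshold is my own assumption, not anything OpenAI exposes today.

```typescript
// A rough sketch of how the three signals above might be recorded on the
// publisher or product side. Event names, fields, and the 20-second
// attention threshold are assumptions for illustration only.

type SummaryEngagement = {
  summaryId: string;           // one scheduled summary delivered to one user
  attentionMs?: number;        // time spent reading, if it can be measured
  clickedItemIds?: string[];   // links followed through to the source article
  followUpQuestions?: number;  // questions asked about the summary afterwards
};

// Crude engagement heuristic: any click-through or follow-up question, or
// more than ~20 seconds of attention, counts as an engaged delivery.
function isEngaged(e: SummaryEngagement): boolean {
  return (
    (e.attentionMs ?? 0) > 20_000 ||
    (e.clickedItemIds?.length ?? 0) > 0 ||
    (e.followUpQuestions ?? 0) > 0
  );
}

// Engagement rate across all delivered summaries in a period.
function engagementRate(deliveries: SummaryEngagement[]): number {
  if (deliveries.length === 0) return 0;
  return deliveries.filter(isEngaged).length / deliveries.length;
}
```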
Can we trust the algorithm?
This is a touch snarky to add to this note, but I think you may appreciate why I have added it.
Several days after launch, I asked about the news alerts. As soon as the answer came, I took a screenshot. But within a second, it overwrote itself. Rather than admitting to “repeating issues,” the text was changed to apologise for “the oversight.”
Does this mean — shock — that an algorithm can have a secondary layer filtering certain information? As we are seeing with automatic follows and downgrading certain information after the U.S. election, this could prove a serious issue.
Can this be truly effective when it can’t access news you pay for?
I wondered whether I might be able to get paywalled content if I provided my login details. It seems I can’t, although I know that is something being considered, which makes sense: if I pay for these sources, I’d like them featured more, not less.
As users click through to our owned and operated products, what kind of experience are we giving them?
It can be pretty jarring for a user to go from a succinct, personalised text to a page with a lot of different media, including advertising. Should we be thinking about giving these users a different, smoother experience?
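One hedged sketch of what “smoother” could look like in practice: detect visitors arriving from an answer engine via the referrer and switch to a lighter page layout. The hostname list and the “lite” layout flag below are my assumptions, and referrer information will not always be passed through, so treat this as an illustration rather than a recipe.

```typescript
// Illustrative only: switch to a calmer layout for visitors who arrive from
// an AI answer engine, based on the HTTP referrer. The hostnames and the
// "lite" flag are assumptions, not a documented standard.

const AI_REFERRER_HOSTS = ["chat.openai.com", "chatgpt.com", "perplexity.ai"];

function cameFromAnswerEngine(referrer: string): boolean {
  try {
    const host = new URL(referrer).hostname;
    return AI_REFERRER_HOSTS.some((h) => host === h || host.endsWith("." + h));
  } catch {
    return false; // empty or malformed referrer
  }
}

// Toggle a hypothetical "lite" reading mode before the page renders.
if (typeof document !== "undefined" && cameFromAnswerEngine(document.referrer)) {
  document.documentElement.dataset.layout = "lite";
}
```

CSS can then hide the heavier modules for that layout without maintaining a separate page.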
INMA members who would like to subscribe to my bi-weekly newsletter can do so here.