New Reuters Institute report sheds light on logged-in users, e-mail, selective news avoidance

By Ariane Bernard


New York City, Paris


Hi everyone.

The Reuters Institute for the Study of Journalism at Oxford University just released its annual Digital News Report, and I’ve begun to dig into it, looking for what it may tell us about our field of data. To be clear, it’s a lot (but in the best way). So to get us started, I’m looking at two different findings from this report: 

  • How users are responding to our attempts to log them in.
  • And what “selective news avoidance” — the report’s phrase for users who say they avoid the news for various reasons — may mean for media companies that use content personalisation.

Did you find something in the Reuters Institute report that you think I should look at? My inbox is open at

Until next time, Ariane

First-party data: The uphill climb to log in users and get them to read e-mail

The just-released 2022 Digital News Report from the Reuters Institute at Oxford University notes two items of particular interest in our data-invested corner of the world: the low prevalence of reader registrations and low trust that publishers will do the right things with data.  

The first item is the somewhat disheartening finding that barely more than a quarter (28%) of news readers are registered for one or more news sites — and that media companies are not terribly well-positioned to improve on these numbers: “Across our entire sample, only around a third (32%) say they trust news Web sites to use personal data responsibly — ahead of social media sites (25%), but at a similar level to trust in online retailers (33%),” according to the Reuters Institute report. 

Findings about trust from the 2022 Digital News Report from the Reuters Institute at Oxford University.

The report further shows a “clear link between general trust and people’s willingness to trust publishers with their data.” The implication is that — when it comes to the battle to log in users and be trusted with their data — the institutional positioning, editorial clarity, and product expression of the publisher’s trustworthiness matter far more than the technical dimensions of a well-functioning login system or the raw capabilities of a customer data platform.

To put it another way: It tells us that our ability to successfully build businesses powered by good, clean, first-party data is first going to hinge on creating value and purpose, as well as knowing how to communicate that value and purpose to the user so we can credibly log enough users. This comes before publishers can even begin to try and optimise how to use this data. 

Furthermore, two other dimensions affect our ability to log in users. 

  • Smartphone prevalence: The first is that smartphones remain the most prevalent devices for accessing news and continue to gain share, and the phone has historically been a device where log-in experiences were subpar relative to desktop. So, all else being equal for a set of users trusting enough to register, fewer of these users will be logged in on their phones. 
  • Decline in direct access: The second dimension is the continued decline of users who access news sites directly — from the homepage of a news site or their native app: 32% were coming in via direct access in 2018 versus 23% in 2022 on average worldwide. 

The report highlights that this is partly due to generational changes, with new, younger readers having built their news habits on social media. Regardless, this is important for news publishers because our ability to obtain data — in both quantity and quality — is also lower with these audiences. Direct access to our products is usually the mark of a well-habituated user — the type who will be more likely to agree to register and log in.

Now, onto everyone’s favourite old-new product: e-mail newsletters — certainly a province of hopes and dreams when it comes to first-party data since the e-mail newsletter user is a known user (for the most part). The report notes that while e-mail remains a productive channel for news publishers, “despite the increase in the supply of newsletters in the last few years, the proportion accessing them has actually fallen in many countries, in part because of increased competition from newer channels such as social media, online aggregators, and news alerts via mobile phones. In the United States, weekly use has fallen slightly from 27% to 22% since 2014 as the use of mobile alerts has tripled from 6% to 20% and social media access has also grown.”

The annual report was released on Wednesday.

This is troublesome for news publishers because e-mail can also be considered a soft login when a reader clicks through. So, having more e-mail users isn’t just good for habits and sheer pageviews — it’s good because it’s a trackable user. Depending on which e-mail service provider (ESP) a publisher uses, news publishers can use links with tracking identifiers that uniquely identify the e-mail address to which the link was sent. 

Sure, Apple’s Mail Privacy Protection now makes individual opens unreliable to track, but link tracking is still around. 

As a publisher looking to enrich your data about a known user’s content preference, you could consider the click from a personally identifiable link a form of “soft log in” and identify a whole session for a given user even if they weren’t “properly” logged in. 

In other words, a good e-mail-originated session is almost as good as a regular browsing session after a proper log in. The term “soft log in” will vary in its meaning and reach by publisher; acting on it is certainly a product decision. The most common soft login is when a user clicks an unsubscribe link: They are not made to log in, yet the publisher’s ESP knows very well which user this is because the link carries a unique identifier — which is how it knows to unsubscribe a specific user from a list.
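The mechanism described above can be sketched in a few lines. This is a minimal, hypothetical illustration — the function names, the signing key, and the `sid`/`sig` parameter names are my own invention, not any ESP’s actual API — but it shows how a per-recipient identifier, signed so it can’t be forged or enumerated, turns an e-mail click into a “soft log in”:

```python
import hashlib
import hmac
from urllib.parse import urlencode

SECRET = b"newsletter-signing-key"  # hypothetical server-side key

def tracked_link(base_url: str, subscriber_id: str) -> str:
    """Append a per-recipient identifier and its HMAC signature to a newsletter link."""
    sig = hmac.new(SECRET, subscriber_id.encode(), hashlib.sha256).hexdigest()[:16]
    return f"{base_url}?{urlencode({'sid': subscriber_id, 'sig': sig})}"

def resolve_soft_login(subscriber_id: str, sig: str) -> bool:
    """Server-side: verify the signature so the click (and session) can be attributed."""
    expected = hmac.new(SECRET, subscriber_id.encode(), hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(expected, sig)
```

The unsubscribe link works the same way: the identifier rides along in the URL, so no password is ever asked for, yet the session is attributable to a known e-mail address.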

So, where next?

On log-in, we may see some improvement through purely mechanical ways. But the bulk of our work is clear: trust and a product that conveys the upside of registration.

The mechanical improvement that may help us is that Big Tech is leaning on the FIDO Alliance’s work to, essentially, make logins far easier and more automated — the passkey approach that replaces passwords with device-based credentials. 

On e-mail, if the new Reuters Institute report tells us anything, it is not to over-hype ourselves (punchline: pivot to video). Sure, e-mail is having a resurgence in that, if anything, we as an industry probably unfairly discounted it for a while. But it’s not a silver bullet, just as subscriptions aren’t a silver bullet for our revenue challenges. 

When it comes to data, the publishers’ ability to build a clearer and better picture of users is going to come from working several angles, all at once. Just because new newsletters are being born every day doesn’t mean they will save publishing — or publisher data.

“Selective news avoidance” and personalisation

The Digital News Report also highlights a growing trend of what it calls “selective news avoidance,” the phenomenon where users avoid news that will depress them or which they feel will just create arguments. 

This number is, on average across countries, 38% in 2022 versus 29% in 2017, with marked differences across markets. This is particularly interesting for us as data practitioners because, beyond the immediately problematic implications for better-informed societies, we know this is the kind of behaviour that personalisation algorithms will also take notice of and respond to, in their own way. 

Research on news avoidance from the latest Digital News Report.

Most personalisation algorithms are driven to emphasise things where they see positive (active) user response. This is, after all, the goal of personalisation — to give you more of what you want as a user.

As it stands, most personalisation algorithms use click-through rate both as a training signal and as a tracked outcome. It’s possible to build personalisation algorithms around other dimensions, but that isn’t likely to be prevalent. 

So with users already telling us they are less likely to engage with hard news, personalisation algorithms will further depress the ability of this kind of news to be discovered in highly mediated spaces like social media news feeds. 

While nobody knows Facebook’s secret sauce except Facebook, there is definitely a factor in the Facebook personalisation algorithm that compares a post’s engagement to the page’s baseline to decide whether the post is under-performing or over-performing, and further boosts posts that over-perform — which is why baby and wedding posts always win on Facebook.
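As a rough sketch of that baseline-comparison logic — and to be clear, this is my own toy model of the behaviour described above, not Facebook’s actual code — the signal could be as simple as a capped ratio:

```python
def boost_factor(post_engagement: float, page_baseline: float, cap: float = 3.0) -> float:
    # Ratio of a post's engagement to the page's typical engagement:
    # > 1 means over-performing (amplified further), < 1 under-performing (dampened).
    ratio = post_engagement / max(page_baseline, 1e-9)
    return min(ratio, cap)  # cap keeps outliers from running away entirely
```

A page whose baseline is set by baby photos makes any hard-news post look like an under-performer, so the dampening compounds.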

Indeed, the new Reuters Institute report notes that “those who often avoid the news are twice as likely to say they see too much news on both Facebook and Twitter when compared with the average user.” 

Now, media companies are also compounding this problem with their own selection bias. In a paper released in late 2021, University of Antwerp researcher Kenza Lamot looked at the headlines various Belgian news media were promoting on Facebook and found the media were already “softening” their promoted selection relative to their actual production. The media, knowing that hard news doesn’t “reach” as well, don’t put it on Facebook as much. 

The report finds TikTok's personalisation algorithms are different than those of other platforms.

An interesting tidbit in the Reuters Institute report pushed against the narrative that personalisation algorithms can only be the enemy of hard news as it spotlighted the growth of TikTok as a news source for a younger generation.

What is interesting here is that TikTok’s personalisation algorithms are somewhat different than Facebook’s in that the source of a piece of content is a very light factor in its discovery logic. 

I am basing this observation on New York Times reporting from a few months ago, which leaned on a leaked internal explainer of the algorithm. However, that leaked explainer is nowhere to be found on the Internet — I went down some proper rabbit holes looking for it and couldn’t find it.

With TikTok, if a creator — even an unknown creator followed by few people — produces an engaging piece of content that people actually end up watching (for example, they get hooked in their first few seconds and keep playing), a viewing user may very well end up being shown news and content outside of their filter bubbles because the main dimension TikTok is looking at is the clip rather than the source. 
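A toy scoring function makes the contrast with follower-driven feeds visible. The weights here are purely illustrative — my assumption of what “the clip rather than the source” might look like, not TikTok’s actual formula:

```python
import math

def clip_score(completion_rate: float, rewatches: int, follower_count: int) -> float:
    # Completion dominates the score; the creator's reach enters only as a
    # very light, log-scaled factor — the content-first logic described above.
    reach = math.log10(follower_count + 1) / 8  # roughly 0..1 up to ~100M followers
    return 0.85 * completion_rate + 0.10 * min(rewatches, 5) / 5 + 0.05 * reach
```

Under this kind of weighting, a gripping clip from an unknown creator beats a skipped clip from a huge account — which is how news can escape a user’s filter bubble.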

Says a 22-year-old woman in the United States: “It’s so addictive … and where it lacks in trustworthiness, it excels in presentation. It’s a news source I end up consuming because I’m also often scrolling TikTok for other reasons, but the algorithm ends up providing news anyways.”

This leaves us:  

  • With some added responsibilities to write responsible personalisation algorithms that have news diet diversity as a dimension to defend in their output, even if it is at the expense of some algorithmic performance. TikTok says they address this in their algorithm.
  • Or using several different personalisation algorithms on users, where filtering isn’t so personal that diverse content can’t enter the race for consideration (this is more the TikTok model). 

A parent who unsuccessfully introduces broccoli to their toddler will offer broccoli again at a later time — and other green vegetables, too. Even if the candy reliably gets selected, that doesn’t mean the personalisation algorithm can’t be written to selectively put broccoli back on the menu again and again.
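The broccoli strategy can be sketched as a re-ranking step: take the engagement-optimised list, then guarantee a minimum of hard news in the final slate even when it would not have made the cut on clicks alone. This is a minimal sketch under my own assumptions (the `category` field and `hard_news` label are hypothetical), not a production diversity system:

```python
def diversify(ranked: list, k: int = 5, min_hard: int = 1) -> list:
    """Return the top-k slate, guaranteeing at least min_hard hard-news items."""
    top = ranked[:k]
    have = sum(1 for it in top if it["category"] == "hard_news")
    if have >= min_hard:
        return top
    # Swap the lowest-ranked slot for the best hard-news item further down the list —
    # broccoli goes back on the menu at a small, deliberate cost in predicted engagement.
    hard = next((it for it in ranked[k:] if it["category"] == "hard_news"), None)
    if hard is not None:
        top = top[:-1] + [hard]
    return top
```

The design choice is explicit: you pay a bounded amount of algorithmic performance (one slot) to keep diverse content in the race for consideration.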

Further afield on the wide, wide Web

In the last newsletter, I mentioned I liked stories where computers pretend to be humans (or where humans try to make their computer feel more human). Well, this week:

  • A little round-up from the headlines, where a Google engineer went to The Economist to explain how the AI system he worked on had become sentient. 
  • And this story in The Washington Post about Google’s reaction and some of the broader questions.
  • This take from Ernie Smith’s Midrange newsletter argues that it’s more of a public relations issue for Google, ultimately, for how it may treat its AI scientist. 
  • And Data & Society in its own newsletter linked back to their article by M.C. Elish and danah boyd from 2018, “Don’t call AI magic”: “[P]erceptions and expectations of AI and systems driven by machine learning have become unmoored from reality,” they wrote then.

About this newsletter

Today’s newsletter is written by Ariane Bernard, a Paris- and New York-based consultant who focuses on publishing utilities and data products, and is the CEO of a young incubated company,

This newsletter is part of the INMA Smart Data Initiative. You can e-mail me at with thoughts, suggestions, and questions. Also, sign up to our Slack channel.

