Here’s why editorial teams can relax about personalisation
Smart Data Initiative Newsletter Blog | 03 November 2022
Hi everyone.
My new report on personalisation came out last week, so it’s the theme of this week’s newsletter — with a twist. It’s more of a personal essay on a specific angle that I’ve long worked through in my head. As for the report, which is written with the goal to provide good foundations of how personalisation works in the context of our news organisation, you can download it on INMA.org (it’s free to members, or you can purchase it).
Also, today we’re kicking off our Product and Data for News Summit, which I am co-curating with my colleague Jodie Hopperton. Check it out and sign up. And if you’re signing up midway through our five-module series, know that all of it is available on demand, so no FOMO!
All my best, Ariane
The editorial arguments against personalisation
I worked for several years as a homepage producer at The New York Times. A lot of how I think about the opportunity of personalisation is informed by the challenges of making a product on such a large one-to-many basis, trying to serve very different goals.
When your entire pool of content is good and the number of visitors is so large, you wish you could create many different homepages, each a better fit for some subset of your users. Those users would still be “well informed” by the judgment of an editor, but the relative ranking of items might differ, and some items might appear in the ranking for one group or individual user that would not appear for another.
My interest in personalisation started both from frustration at what I couldn’t do and from the knowledge that we could do so much better, for the user and for the news organisation alike.
Several years later at The New York Times (I was working on the CMS team by then), I had the chance to work on a long project that sought to explore novel ways to seed personalisation. I can’t tell you how much I loved that project, and my interest in personalisation was so keen that I decided I only wanted to work on data- and personalisation-related projects from then on.
This is why I left The Times and joined Taboola, which knows a few things about massive data and personalisation at scale.
But in working across different projects that touched the question of scalable and efficient content discovery, I also often wondered about the nature of other people’s misgivings in the room. It feels entirely self-evident to me that personalising the news and the news experience is a huge opportunity, and that the sharp edges of this endeavor can be efficiently polished.
Yet there are misgivings, of course, and they are worth thinking about. Now, the thoughts below are very much my opinion, and I’ll be happy to hear opposing views (just hit reply on this newsletter).
The first is this: Arguments against personalisation in our editorial products that lean on the risk of filter bubbles or bad algorithms are, in my view, strawmen. Filter bubbles occur naturally anyway. We self-select our friends and the circles we run in within our communities. We filter our own media diets. The echo chamber is something we already organise for ourselves (I’ll refer you to Axel Bruns’ excellent book, Are Filter Bubbles Real?).
And the argument about bad algorithms assumes far more unbounded datasets than what news publishers work with. The entire universe of YouTube video is very different from the production of a news publisher, which by definition vets every article in the set. In addition, personalisation strategies can range from the truly free-range (deep learning) to the far more tamed and managed (rules-based).
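To make that spectrum concrete, here is a minimal sketch of the “tamed and managed” end: a rules-based personaliser that never leaves the editor-vetted pool. All names, fields, and scores below are hypothetical illustrations, not any publisher’s actual system.

```python
# A rules-based personaliser over an editor-vetted pool of articles.
# Everything here is illustrative: field names and weights are made up.
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    section: str
    editor_score: float   # editor's quality judgment, 0-1
    vetted: bool = True   # a news publisher's pool is vetted by definition

def personalise(pool, preferred_sections, top_n=3):
    """Rank vetted articles by editorial score, with a small bonus
    for sections the reader has shown interest in."""
    candidates = [a for a in pool if a.vetted]
    def score(a):
        bonus = 0.2 if a.section in preferred_sections else 0.0
        return a.editor_score + bonus
    return sorted(candidates, key=score, reverse=True)[:top_n]

pool = [
    Article("Budget vote tonight", "politics", 0.9),
    Article("Derby preview", "sports", 0.8),
    Article("New gallery opens", "culture", 0.7),
]
# A sports-leaning reader sees the derby piece first, but the pool
# itself never changes: every candidate was vetted by an editor.
print([a.title for a in personalise(pool, {"sports"})])
```

Because the rules are explicit (a fixed bonus, a vetted pool), this end of the spectrum is easy to audit, which is precisely why it feels “tamed and managed” compared with a learned model.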
So, where is the anxiety about personalising the news truly anchored?
In personalised news, the role of the editor is to judge articles on their own, but rankings and selections are derivative.
One argument I have heard many times is far more fragile than the other, but is actually what I think is the heart of the anxiety: “The editor knows better.”
In three words — selection, judgment, ranking — it captures the perception and fear that a role long at the heart of our industry and profession is being challenged at the highest level of its authority: its judgment and knowledge.
This challenge raises a question about changing roles, and therefore changing identities, for the people in editing jobs closest to selection, judgment, and ranking.
When we look at personalising news, we are shaking some long-standing assumptions about the way journalism is crafted and distributed. We are not necessarily changing journalism, though. But change management and transformation are never easy — and personalisation is not a trivial amount of change to manage and transformation to make in people’s skills, careers, and, therefore, identities.
Personalisation asks a question about the role of editors in ranking content. If the set of articles that is delivered to user A is different from the set of articles for user B, we know rankings are not absolute. And ranking has been part of the role of the editor since the beginning of modern news making.
If we see the role of the editor as the “ranker” of editorial content, then introducing a new automated function that takes ranking away creates an existential crisis.
I think a key reason for the unease of many journalists around personalisation is a deep-seated anxiety that their job is being made obsolete by a robot. While many journalists may in fact say that they are not worried about this, I believe this anxiety is latent. Editors will say “I know better,” “personalisation can make mistakes.” They will point to filter bubbles.
And I don’t mean to suggest these objections aren’t genuine. But I believe the main issue is really the perception that personalisation changes their job and puts it at risk.
But a false intellectual equivalence has been drawn here between judgment and ranking, and this is where journalists need not worry.
Rankings are the derivative expression of a judgment (which can be absolute) filtered through the matrix of a reader’s personal preferences. When a news editor in New York tells her boss, “This is a key piece to put on our front page in the New York edition, but it should go inside for our out-of-state edition,” she is doing a form of personalised ranking. Her judgment is the same — the piece is great — but her ranking is affected by the circumstances of the people who will receive it.
Professional journalists already separate judgment from ranking. Personalisation is not about doing without the value judgements of editors — the dimensions by which we structurally assess the qualities of our content (scores, of which I am a big, big proponent: “best by” reading dates, sentiment analysis).
The level of granularity of what editors see and judge is far more refined than what robots can do. Good personalisation algorithms will be good precisely if they lean into leveraging value judgments made by humans (and stored in such a way that the algorithm can read them). Companies that are leading the way in this space are building personalisation algorithms that do this.
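That separation can be pictured in a few lines of code: the editor’s judgment is recorded once as an absolute score, and personalisation only re-orders articles by combining it with a per-reader relevance weight. Every name and number below is a made-up illustration of the idea, not a real system.

```python
# Illustrative only: the same absolute editorial judgments produce
# different rankings for different readers. Data is hypothetical.
judgments = {  # editor's quality scores, set once, absolute
    "Key investigative piece": 0.95,
    "City council roundup": 0.80,
    "Statewide policy explainer": 0.85,
}
relevance = {  # per-reader relevance weights (0-1)
    "new_york_reader": {"Key investigative piece": 1.0,
                        "City council roundup": 1.0,
                        "Statewide policy explainer": 0.5},
    "out_of_state_reader": {"Key investigative piece": 1.0,
                            "City council roundup": 0.3,
                            "Statewide policy explainer": 1.0},
}

def rank_for(reader):
    # Ranking = judgment (fixed) x relevance (varies per reader).
    # The algorithm never overrides the editor; it only reads her scores.
    return sorted(judgments,
                  key=lambda t: judgments[t] * relevance[reader][t],
                  reverse=True)

print(rank_for("new_york_reader"))
print(rank_for("out_of_state_reader"))
```

The investigative piece leads both lists because its judgment is strong everywhere; only the pieces whose relevance varies trade places. Judgment stays with the human; ranking becomes derivative.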
Because, importantly, one place where editorial judgment will be unmatched (for some time, at least) is anticipation: anticipation of what the user needs or wants, of large-scale questions in society, of “playing out this scandal three steps ahead.” This is the kind of learning computers are nowhere near being able to do.
Which is why the role of an editor as a judge is quite safe.
Further afield on the wide, wide Web
- The algorithm is more heartless than humans, and that’s why the rent is so damn high. This is a fascinating look by ProPublica into RealPage, a real estate tech company that sells a service to price rent on empty apartments. The real price of things is, of course, the classic examination of the power of supply and demand in an open market. But is an algorithm the ultimate test of the equilibrium between these two forces? It gets deep …
- Our friend Aram Zucker-Scharff of The Washington Post (he spoke at our last master class series in March) presented at the W3C’s Private Advertising Technology group, giving a great overview of where private advertising tech is at.
About this newsletter
Today’s newsletter is written by Ariane Bernard, a Paris- and New York-based consultant who focuses on publishing utilities and data products, and is the CEO of a young incubated company, Helio.cloud.
This newsletter is part of the INMA Smart Data Initiative. You can e-mail me at Ariane.Bernard@inma.org with thoughts, suggestions, and questions. Also, sign up for our Slack channel.