Historically, all content was created for a bundle.
As magazines and newspapers show, the bundle was designed with a dual revenue stream in mind: a larger one from ads and a smaller one from readers. In the case of newspapers, the bundle also presumed some degree of exclusivity of readership (read: monopoly), and thereby the need to cater to a wide range of customers.
The high profitability these publishers enjoyed allowed them to invest in and commission content of extremely high quality. Witness the long-form articles in Esquire, Rolling Stone, GQ, or Vanity Fair, or even in the newspapers of mid-tier U.S. cities, where such pieces often sit oddly alongside the rest of the content (photo features, fashion spreads, local news, etc.).
Curation was thus built into the very act of creation or commissioning. Such a high degree of filtering at the stage of creation, together with the constraints of print (limited pages and finite titles), meant that little was published, it was almost always of high quality, and it was well paid for. Most importantly, it was still possible (for someone with a lot of time on their hands) to read almost every single high-quality story.
The Internet, however, changed things. It removed the constraints of space, it removed the formidable barriers that had kept newer media outlets out, and, lastly, it eviscerated the bundle.
This leads us to three very interesting and well-known problems.
- There is so much content out there that you can’t read even 1/100th of the high-quality long-form content.
Take science writing alone. I can readily think of three high-quality, science-only content sites – Nautilus, Aeon, and Quanta – and I am sure there are more out there. The old ones such as Wired, Popular Science, Popular Mechanics, and Scientific American are all there as well.
- I would hazard that you can’t even discover one-tenth of such content. Volume is also the enemy of discoverability, not just consumption.
- There is the challenge of compensation for the content. Historically, ad revenue indirectly paid for it. Today the broken bundle means that subsidy has entirely disappeared, even as the brakes on content creation have been lifted.
One way to address the first two problems – too much content and not knowing which to read – is through investments in curation. You can do this by adding algorithms (and some human editors) to existing content streams to help interesting stuff surface.
Examples include Twitter’s Moments, Reddit’s Upvoted, NYT Now, etc. A little further down the curve are platforms such as Medium and LinkedIn, which build in discovery as a central feature of their platforms and are able to surface relevant or interesting content more easily.
But what if you created a platform built from the ground up to enable discovery of content? For this platform, discovery would be the primary problem to solve, unlike on Medium, where it is the second problem to solve.
The old way was to bring in content first, and then ensure discovery via tools or editors. On this re-thought platform, you worry about discovery first, and then work backwards to how content submitted to such a site should be structured.
So how would such a discovery-led platform work? How would it look and feel?
It perhaps wouldn’t feature a text or document editor. The point is not to make it easier to write. Enough tools and sites exist for that.
Instead such a site would primarily look to do the following:
- Categorise content well, so as to connect writing to the reader looking for it. For that to happen, some work would have to go into building an interest graph for the reader, or perhaps using existing graphs such as Twitter’s to understand what they are likely looking for.
I am not even sure the site has to host the content itself. Instead, it could aggregate links to sites such as LinkedIn, Medium, or even personal blogs. What the site would do, however, is encourage writers to submit these links with richer metadata and descriptors. And, of course, there would be editors who manually add descriptors and categorise this content.
Tags, you say. Well, yes, tag by all means. But what I am referring to is more akin to the descriptors assigned to songs in Pandora’s Music Genome Project. I am not sure we need that detailed a taxonomy, but it would need to go beyond the simplistic tags that exist now, to descriptors such as the places detailed in the article or the type of analysis (say, start-up postmortems).
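To make the descriptor idea concrete, here is a minimal sketch in Python of what one such record, and a naive match against a reader’s interest graph, might look like. The field names, values, and matching rule are purely illustrative assumptions, not a proposed schema:

```python
# A hypothetical descriptor record for one submitted link, loosely
# analogous to a Pandora-style attribute set. All fields are invented
# for illustration.
article = {
    "url": "https://example.com/why-the-startup-failed",
    "title": "Why the Startup Failed",
    "places": ["Bangalore", "San Francisco"],  # locations detailed in the piece
    "analysis_type": "start-up postmortem",    # kind of analysis, not just topic
    "reading_time_min": 18,
    "tags": ["startups", "venture capital"],   # coarse tags still coexist
}

def matches(article, interest_graph):
    """Return True if any descriptor overlaps the reader's interests."""
    reader_interests = set(interest_graph)
    descriptors = (
        set(article["places"])
        | set(article["tags"])
        | {article["analysis_type"]}
    )
    return bool(descriptors & reader_interests)
```

A real system would weight descriptors rather than treat overlap as binary, but even this toy version shows why richer metadata beats flat tags: a reader interested in "Bangalore" can be matched to a story no tag would have surfaced.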
- Put some kind of constraint on creating or adding posts. Posting new content could be limited to once a week, or linked to some other condition, such as having read “x” articles or following “y” people on the network.
One concern with this is that it could impact growth of the network. Perhaps one solution would be to go easy on the rules initially to get a critical mass going, and then bring in some restrictions. This is, in fact, the exact opposite of what Medium and LinkedIn have done. They restricted publishing to a smaller group to create buzz, and then let all and sundry do it.
There are other examples of constrained publishing. One recent and interesting experiment in curation by constraint is a Twitter clone with one big difference: it limits you to one post (sharing a link) a day.
And there is Wikipedia, another interesting example of a platform that constrains creation. It is not easy to create new topics (especially on people) or even to edit existing ones. The continuous policing of content creation by a strong community of editors, which ironically makes it much harder to post, has resulted in extremely high-quality, trustworthy content.
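The posting constraint described above is simple to express in code. Here is a hedged sketch, with invented thresholds, of how such an eligibility check might work:

```python
from datetime import datetime, timedelta

# Illustrative thresholds for the "read x articles or follow y people"
# condition; the actual numbers would be a product decision.
MIN_READS = 10
MIN_FOLLOWS = 5

def can_post(last_post_at, articles_read, people_followed, now=None):
    """One post a week, and only once the user has engaged with the network.

    last_post_at: datetime of the user's previous post, or None if none yet.
    """
    now = now or datetime.utcnow()
    week_elapsed = last_post_at is None or now - last_post_at >= timedelta(days=7)
    engaged = articles_read >= MIN_READS or people_followed >= MIN_FOLLOWS
    return week_elapsed and engaged
```

Loosening the rule early on (as suggested above) would just mean starting with low thresholds and ratcheting them up once the network reaches critical mass.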
- Link discovery or distribution of content to contexts. The platform would push out content by combining its metadata and descriptors with data collected from third-party applications via non-media APIs. I alluded to this in a previous post of mine.
The idea is to link consumption of content to certain contexts: sending a story whose length matches my Uber ride, say, or letting me read a story set in a city I am visiting, using the Uber and Google Maps APIs respectively. Identifying contexts is a matter of ingenuity; one can imagine sending breaking news to a person who has just woken up, as detected via Fitbit data.
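The ride-length case can be sketched simply. Assuming a fixed reading speed and a toy story list (both invented for illustration; a real version would pull the ride duration from a ride-hailing API), matching a story to a context is just a filter and a pick:

```python
# Assumed average reading speed; real systems would personalise this.
WORDS_PER_MINUTE = 250

def story_for_ride(stories, ride_minutes):
    """Pick the longest story that still fits within the ride's duration."""
    def read_minutes(story):
        return story["word_count"] / WORDS_PER_MINUTE

    fits = [s for s in stories if read_minutes(s) <= ride_minutes]
    return max(fits, key=read_minutes) if fits else None

# Toy catalogue, for illustration only.
stories = [
    {"title": "Short dispatch", "word_count": 900},
    {"title": "Long-form feature", "word_count": 5500},
]
```

The same shape works for any context signal: swap the duration filter for a place filter (using the descriptor metadata discussed earlier) and you get the "story set in the city I am visiting" case.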
Thus the platform would build in discovery via curation by categorisation, constraint, and context. Such a discovery-first platform (versus a publishing-first platform such as Medium) is one way a new player can create space for itself in the platform marketplace today.
I would also argue that such a platform, where the chances of readers finding the content they want to read are higher, is more amenable to a subscription or pay-per-article model. Thus solving the first two problems – too much content and not knowing what to read – eventually solves the problem of getting people to pay for it too.
Let us now get back to the bundle.
Accompanying the break-up of the bundle has been a parallel trend: that of more and more closed platforms emerging to enable content consumption – Facebook, Apple, Flipboard, Medium, etc. As consumers primarily discover your content via these sites and not the bundle (printed or online), we are really seeing a move toward atomisation of content.
Let me give you an example. Historically, to consume a story by Esquire, say this outstanding piece on the MH370 hunters, you went to the magazine or the Web site, and, in doing so, saw the content in and around the story. It was experienced as part of a mix of stories.
Today you discover the story thanks to Twitter or Facebook. You read the story and close the link. In fact, after a couple of days, you will remember the story but not where it was published.
What does it mean to be a publisher or bundle-owner in these times? As atomisation becomes a reality, how does it impact publishers?
If the Washington Post’s content is increasingly consumed on Facebook, and typically in an atomised format, then it is unlikely that the Post will be able to extract any value from the bundle itself (i.e., the collection of stories and the way it has been curated by editors).
The whole was always greater than the sum of its parts. That is no longer true.
Still, there will always be value in a distinctive editorial voice: a story that is so typically New Yorker, or a Wall Street Journal A-hed. And, over time, a kind of marker or brand may develop to signal to readers that it is worth reading. But it will not be a bundle (of varied items); it will be more of the same kind of content.
How should publishers react to the atomised world?
One route is to move from full-stack players to no-stack startups (courtesy of Andy Weismann).
Can you be a publisher if all you do is source stories and focus on getting them read on other platforms such as Medium or Facebook? There is some precedent here, with publishing brands such as I Fucking Love Science on Facebook and The Shade Room on Instagram. Still, these are both bundles of a kind, hosted on third-party sites.
What I suggest instead is a media company that commissions stories across a multitude of topics and plays them across different sites (some ad-led, some paywalled). There is no single Web site where you can access the entire lot of stories (or there could be, but it wouldn’t matter).
Think Random House, but for articles. Just as Random House runs no bookstore of its own, focusing instead on commissioning the most interesting books and striking deals with distributors (Amazon or Barnes & Noble), the publisher of the future will decouple itself from the distribution platform.
This bundling of content, which, after all, is really curation of a kind, is now moving downstream, closer to the consumer. And in many ways it is being created by consumers themselves: they are assembling the bundles they want to consume.