I don’t consider myself an expert on fake news (defined here as the spread of misinformation or disinformation online). However, given that I run investments for a media venture capital (VC) fund, I’ve likely seen more pitches from start-ups trying to solve the fake news problem than almost any other VC.

Here’s what I know: Artificial Intelligence (AI) is flawed because, no matter how you build the algorithm, you’re basing it on the history of news publications, which will inherently bias it against newer and/or niche publications.

Regardless of a machine’s sophistication, it can never report news like a human.

If you were to build an algorithm that didn’t disadvantage new players in the space, how would you do it? You couldn’t base it on site pages, because established players would always win. You couldn’t base it on Pulitzers won or the number of reporters on staff, for the same reason. And you shouldn’t base it on other sites reporting the same story as a quasi-crowd-sourced seal of veracity, because, in breaking-news situations, reporters often get the story wrong.

This is why organisations have to issue corrections.

So, how do you build an algorithm that doesn’t crowdsource, doesn’t disadvantage new players, and doesn’t send false positives (where a true story may be flagged as fake)? A natural inclination might be to use human fact checkers. It works for Wikipedia after all.

But human fact-checkers can’t work for news, because news is fundamentally different from Wikipedia. The value of news is that it’s new. While the information on a Wikipedia page may evolve over time, Wikipedia’s users don’t need it to be accurate up to the minute. News, by definition, has to be up to date, and humans can’t fact-check fast enough to keep up with a developing story. That’s the whole point of reporting: the reporters (or their publications) are supposed to verify things themselves before they publish.

This raises the question (and it pains me to even ask): What is truth in the context of reporting? And how do we know we can trust the organisation reporting to us?

I wish I had the answer. Unfortunately, I think we’re stuck with the prospect that maybe we have to hold each other accountable. We have to read critically and question when something seems out of the ordinary. We, as the news-consuming public, must be willing to put in the work.

In June of last year, I wrote that we need to make being uninformed a taboo in this society. Unfortunately, I think we need to add “lazy” to that list of necessary taboos. Fake news is not a problem that is going to be solved (though if you still think you can solve it, feel free to pitch me). So, I’m now writing that we must stop expecting a solution that requires no expenditure of our own individual energy. We are each our own fake news solution. It’s more work, but there’s no alternative in sight.