For a while now, I have been immersed in Internet metrics and in how to actually measure online audiences.
In one of the master’s classes I took in Madrid in June, one of my professors traced the history of the Internet from its origins and its first measurement techniques to its social and economic importance today. He explained why he does not have a Facebook account, and why he does not download any app that asks to access the personal information stored on his cell phone.
When we went on to discuss the challenges of the digital world we live in today, what followed left an “elephant” in the room and a knot in my stomach, too.
It was quite a lesson. And, no, it’s not the Internet’s fault!
The Internet was not born in the library of an American university to connect people. Nor was it born as a weapon of war to spy on the Russians. Before it existed, there was ARPA (the Advanced Research Projects Agency), the US agency behind the first operational network, which connected military computers and research departments back in 1969. Its purpose was to be indestructible.
Paul Baran was hired for the mission: to develop a system that could maintain communication between two points even if the worst happened. The fear, in the heat of the Cold War, was of a possible nuclear attack.
Soon after, ARPA used the invention to launch the ARPANET, likely the “mother” of the Internet. The ARPANET was then divided: part of it continued to serve military purposes, while another part became public, was renamed the “Internet,” and was placed under the management of the National Science Foundation (NSF).
In the 1980s, all of these networks began to respect a common transmission protocol (TCP/IP), and the first personal computers were developed. Back in my lovely home country, Brazil, we only got access in the 1990s, when the United States decided to privatise the invention and the Internet, as we know it today, was born.
If the mother of the Internet is the ARPANET, the father is definitely the physicist Tim Berners-Lee. Before him, the Internet was just the Internet itself — not the “World Wide Web.” In 1989, while working at CERN, the European nuclear research centre in Geneva, Berners-Lee achieved the first successful communication between an HTTP client and a server over the Internet, now written with a capital letter (HTTP being the protocol that transfers hypertext and, therefore, information).
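That first exchange is easy to picture in modern terms: a client sends a plain-text HTTP request, and a server answers with a page of hypertext. Here is a toy sketch in Python (my own illustration, nothing resembling CERN’s original code), running a tiny server and client on localhost:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class HelloHandler(BaseHTTPRequestHandler):
    """Serves one small page of hypertext, as Berners-Lee's first server served documents."""

    def do_GET(self):
        body = b"<html><body>Hello, Web</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo output quiet

# Port 0 asks the OS for any free port.
server = HTTPServer(("127.0.0.1", 0), HelloHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client side: an HTTP GET request, answered with hypertext.
with urlopen(f"http://127.0.0.1:{server.server_port}/") as response:
    html = response.read().decode()

server.shutdown()
print(html)  # -> <html><body>Hello, Web</body></html>
```

The whole Web still rests on this simple request/response pattern; everything since has been layered on top of it.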
He was also the creator of the first “browser.” Berners-Lee went on to release his invention into the public domain, attaching no commercial goals to it. Today, he is the director of the World Wide Web Consortium (W3C) and continues to oversee the development of his creation.
The NSF also had a clear policy for Internet use, which did not allow commercial activities. But with privatisation and the speed at which Berners-Lee’s invention replicated, that policy soon became history.
Internet providers began to appear, giving those home computers a chance to connect to the network. In 1994, Netscape, considered one of the first companies of the Internet bubble, was born, and soon after came AltaVista and Yahoo.
Unlike Berners-Lee’s idea of an open, editable browser, Netscape provided a closed service, thoroughly exploited commercially.
Google was born in 1998. What the search engine has become today, almost 20 years later, goes far beyond its initial mission of organising the Internet’s information. In the article “The Anatomy of a Large-Scale Hypertextual Web Search Engine,” Larry Page and Sergey Brin presented the Google prototype in 11 pages, explaining how it would work.
Back then, the two of them criticised other search engines for being biased and commercial, a far cry from the Google that exists now. Today, its commercial nature is undeniable, and its business clearly rests on the control it has over the (digital) steps of users and their searches.
Web 2.0, focused on participation and collaboration, became popular in the early 2000s. It was not a technical change to the network itself, but rather a change in the way pages were designed and used.
Blogs became popular and, soon after, social networks were born: Fotolog in 2002; Delicious, LinkedIn, and MySpace in 2003; Orkut and Flickr in 2004; Twitter in 2006; Tumblr in 2007; WhatsApp in 2009; Instagram in 2010; Google+, Pinterest, and Snapchat in 2011 — to recall only the ones I had, or still have, an account with.
Facebook also dates from 2004. It had the simple goal of connecting college students. The ambitious mission of “giving people the power to share information and make the world more open and connected” came later.
And, just like Google, it did not come to the world with the evil goals of putting democratisation of information at risk, spreading fake news, spying on us, storing our memories, isolating us into communities and bubbles, controlling our desires and impulses, and offering products and content based on our previous choices or the choices of our friends.
They were not created to decide arbitrarily, with rules of their own that are beyond our control, what we should read and watch. They were not designed to be giant editors of what we should or should not be exposed to. But all this, and more, happens today.
Apparently, the world’s two largest technology (and media) companies had no such agenda, yet this duopoly now accounts for almost half of the global digital advertising market. It is an environment in which we are all now immersed.
Now that the tables have turned, and all of this is a real and frightening truth, who is to blame? And, consequently, who is responsible for keeping the digital environment safe, free, and democratic?
It is safe to say that the Internet Berners-Lee created, an online place where I can connect and exchange information with you all, is not to blame!