No easy answer exists in removing Trump (and others) from social media networks
Media Leaders | 25 January 2021
Twitter, Facebook, Instagram, and a host of other social media networks have decided to block former U.S. President Donald Trump from their sites. The debate shows how difficult it can be to navigate uncharted territory fraught with dilemmas.
Some of the questions that remain unanswered are:
- Is it legitimate in a democracy to protect people and society from an anti-democratic and manipulative, but legally elected, leader?
- Does blocking someone from a social media network represent a threat to freedom of expression?
- Should the same rules apply to a social media network as to editorially driven media?
- Is it possible to regulate gigantic social media networks without destroying the free and open Internet?

People decide for themselves whether they want to deny facts and embrace conspiracy theories instead. What society can do is provide everyone with reasonable access to journalism and other verifiable information, and make sure that whatever is illegal in the analogue world is also illegal in the digital world.
The idea of protecting vulnerable groups from deceitful but legal manipulation is a challenging one. It balances on a knife's edge between undue interference and the imperative to protect society from extremism and violence.
Freedom of expression and opportunities for expression are often confused in the debate. Freedom of expression is the legal right to express oneself freely. Its limitations are laid down in law, weighed against other fundamental rights.
Incitement to punishable acts, for example, is prohibited, a point directly relevant in Trump's case. Freedom of expression is not a right to express oneself about anything in any medium of one's choosing. When someone is excluded from a medium, their opportunity for expression is limited, but their freedom of expression is not.
Anyone can start their own blog and express their opinions about whatever they want within the rule of law, but they cannot demand that a publication like Aftenposten publish something they submitted. Nor can they stop Twitter from removing a tweet.
Nonetheless, the key difference between these two actors lies in the rules, norms, and values by which they are governed. Aftenposten is an editorially driven medium with an editor who is obliged to uphold the media’s social mission, the Media Liability Act and other laws, the Editor’s Code, the Ethical Code of Practice for the Press, and Aftenposten’s own publishing principles.
Twitter is not an editorially driven medium, and the legal framework for liability lies largely in the e-Commerce Directive. Twitter has not defined itself as a publisher with a social mission; it has no editor-in-chief and no ethical rules guided by an editorial mission.
The social media networks operate with terms of use that are adapted to the role of channels on which users can share content. Successful regulation of this market requires an understanding of the differences between editorially driven media, social media networks, and telecom companies with responsibility for the technical distribution only.
Digital platforms in Europe and the United States have so far enjoyed limited liability: they have no duty to remove illegal, user-generated content from their networks until they are made aware of it. The justification for this form of liability regulation is that the Internet should be free, and everyone should be allowed to express their views without being subject to review before anything is published. No one wanted this type of "surveillance" of the Internet.
The problem is that what limited accountability the networks do have has barely been complied with or monitored, and the European Commission wants to do something about that with its recently proposed Digital Services Act (DSA).
Among other things, the DSA contains a comprehensive set of rules regulating the digital platforms' responsibility for illegal content. The platforms' responsibilities for removing illegal content are clearly defined and harmonised, requirements are set for simple reporting processes, and decisions on whether something is illegal will be determined and enforced under national law.
New transparency and reporting requirements are proposed, including additional obligations for very large online platforms and requirements to disclose how their algorithms shape what content users see. The purpose is to obtain more facts on the problem of illegal content in the networks, better understand how content is distributed, and find out what action the networks themselves take to remove illegal content.
Regulating areas where fundamental principles conflict with each other is extremely challenging. But giving up cannot be the alternative; the solution lies in ensuring that the European Commission's proposed regulation strikes a balance between freedom of expression and the need to give the networks clearer and more binding responsibility.