
Silencing Trump, preserving free speech?

Following the riot at the US Capitol on January 6, the social media platforms Facebook and Twitter, followed by other online platforms, suspended and then banned President Donald Trump’s accounts, as Twitter put it, “due to the risk of further incitement of violence.”

Many welcomed the action and some called it long overdue. Others claimed the actions amounted to censorship that would backfire. And many more wondered about the wider implications. Should the same platforms “deplatform” other world leaders who intentionally spread rampant disinformation? Or are their actions tantamount to curtailing freedom of expression?

International fact-checking organisations had mixed reactions, but Natália Leal, content director at Agência Lupa in Brazil, identified the main issue as the distinction between private and public space: “Facebook shouldn’t be seen as a public space since it develops and maintains algorithms that answer its commercial and political interests.” Their responsibility, then, lies in developing clear terms of service that they apply consistently.

Twitter itself, however, does see itself as a public interest service, and its stated public interest framework “exists to enable the public to hear from elected officials and world leaders directly. It is built on a principle that the people have a right to hold power to account in the open.”

Already in Europe, lawmakers are calling for a stronger regulatory response to disinformation rather than relying on the platforms themselves to self-regulate. They seek consistent regulation across all platforms. But that also affects response time: platforms can take immediate action, while legislation and enforcement can take months, if not years.

Others say the problem is larger and rooted in the “extraordinary control over communications infrastructure” held by the likes of Facebook and Google. As an opinion article in the Guardian stated, “It’s their dominance and business model that promotes conspiratorial, fake and violent content to millions.”

People in other countries have long had to deal with polarization and censorship. On January 12, Facebook removed accounts linked to the Ugandan government, and the government in turn shut down the internet ahead of the January 15 presidential election. That shutdown took away the principal means civil society uses to inform and mobilize voters, namely social media. So while social media platforms may try to censor potentially volatile disinformation by governments, governments can use the same strategy to shut down dissent.

In the midst of this debate, the reality is that those who actively foment disinformation and instability readily change tactics, adapting to any platform or technological change, as documented by Brookings.

What is clear is that the dominance of online platforms in shaping public discourse has to be addressed at multiple levels in order to effect real change that protects democratic participation and accountability. We all have to be active, as individuals and organisations, to preserve freedom of expression and the full range of communication rights that ensure truthful voices are heard, accountability is guaranteed, and democratic participation is enabled for all.

Photo: Julian Leshay/Shutterstock
