MD 2022/3 Editorial

The theme “Democratizing Communication, Rediscovering Solidarity” suggests that there is an essential link between people’s capacity to communicate their concerns and aspirations and their ability to bring about greater political and social justice. 

We know already that Putin’s savage, criminal, and inhuman war on Ukraine was in part facilitated by social media propaganda and State control of mass media – especially television. Kept in ignorance or lied to by the State, the Russian people could not express their solidarity with their Ukrainian cousins. And in the Philippines, Ferdinand Marcos Jr.’s election to the presidency was allegedly due to a flood of online trickery and disinformation in the run-up to polling, effectively silencing public criticism and opposition.

In their 1988 book Manufacturing Consent: The Political Economy of the Mass Media, Edward S. Herman and Noam Chomsky argued that people could be manipulated by covert propaganda and systemic biases to provide consent for economic, social, and political policies, both foreign and domestic. This “propaganda model” identified corporate media as businesses interested in the sale of a product (audiences) to other businesses (advertisers) rather than in public service journalism. It also critiqued the growing concentration of media ownership in many countries.

Published just before the rise of the global Internet, the propaganda model could not have taken into account the impact of social media networks (including the sale of consumer data) or the pervasive influence of Big Tech. Now, in The Power of Platforms: Shaping Media and Society (Oxford University Press, 2022), Rasmus Kleis Nielsen and Sarah Anne Ganter have identified forms of “platform power” that tech companies are able to exercise at scale. According to the authors, the five most important aspects of this platform power are:

  • The power to set standards that others in turn have to abide by if they want to be part of the social and technical networks – and markets – those platforms enable.
  • The power to make and break connections within these networks by changing social rules (“community standards”) or technical protocols (search and social ranking algorithms).
  • The power of automated action at scale as their technologies enable and shape billions of transactions and interactions every day.
  • The power of information asymmetry relative to users, competitors, regulators, and other outside actors: the platforms operate as opaque black boxes in which outsiders can see only inputs and outputs, on the basis of limited and biased data, while the platforms alone are privy to how the processes work and have access to much more detailed data.
  • The power to operate across domains, where the data collected through a photo-sharing app can be used to target advertising on a social network, and the ecosystem created through a mobile operating system can help sell hardware.

Elsewhere, Reporters Without Borders’ 2022 World Press Freedom Index: A new era of polarisation identifies how false news and deliberate disinformation are continuing to debilitate democratic debate:

“Within democratic societies, divisions are growing as a result of the spread of opinion media following the ‘Fox News model’ and the spread of disinformation circuits that are amplified by the way social media functions. At the international level, democracies are being weakened by the asymmetry between open societies and despotic regimes that control their media and online platforms while waging propaganda wars against democracies. Polarisation on these two levels is fuelling increased tension.”

The methodology used to draw up the Index defines press freedom as “the effective possibility for journalists, as individuals and as groups, to select, produce and disseminate news and information in the public interest, independently from political, economic, legal and social interference, and without threats to their physical and mental safety.”

In this context, the role of social media platforms as propaganda tools needs to be explored thoroughly if freedom of the press is to remain a bastion of democracy, especially in a world that increasingly relies on digital technologies underpinned by Artificial Intelligence (AI).

It is a world where AI is shaping contemporary politics, where public authorities use AI to automate the allocation of public services, where judges use risk-assessment algorithms to determine a person’s eligibility for bail or parole, where political actors use AI and social media platforms to engage in microtargeting and misinformation, and where law enforcement agencies use facial recognition systems and predictive analytics to improve surveillance.

Despite all this, AI has the potential to enhance democracy by enabling a deeper understanding of societal issues and by helping to develop more effective policy tools and actions. When abused, however, AI reinforces existing inequalities and biases, increases polarisation, and ultimately undermines not only democratic systems but also the preconditions that enable democracy to flourish.

On 15 April 2021, members of the European Union’s Special Committee on Artificial Intelligence in a Digital Age (AIDA) heard two panel discussions on the topics of AI and the future of democracy, and on tech developments and regulatory approaches to disinformation. At the start, Romanian politician and AIDA Chair Dragoș Tudorache observed:

“At the dawn of the digital age, we must set in place rules, worldwide, which will ensure AI will not be used to undermine democracy. First, we need to look inward, and ensure that we do not allow the use of AI for undemocratic practices such as mass surveillance, mass social scoring by the state, or discrimination in Europe. Second, we must reach out to the world’s democracies and work together to build an alliance of digital democracies strong enough to set the rules, standards, and red lines of a democratic digital future, worldwide. Third, we need to ensure that we are protected – by strengthening our cybersecurity, increasing our own citizens’ resilience to fake news and disinformation through education, and developing cutting-edge tools to counter cutting-edge attacks. Last but not least, we need to understand that AI-powered attacks on democracy can be even more devastating than conventional attacks and we must treat them as such. This needs to be reflected in our defence policy, in our cooperation with and participation in NATO, in our transatlantic alliance, and in our global strategy.”1

The convergence of digital platforms and AI poses new challenges to communication rights that need to be identified, systematized, and independently regulated to prevent a global system from emerging that is entirely dominated and controlled by corporate interests and despotic regimes. Time is running out.

Note

1. AIDA Working Paper on “AI and the Future of Democracy”. June 2021. European Parliament.
