Digital platforms versus democratic political discourse: Challenges and the way forward

Seán Ó Siochrú and Anita Gurumurthy

At the turn of this century, many in civil society believed we were witnessing the dawn of a new era for the public sphere, one where the internet could realise the promise of a communication space where all voices could be heard equally, where the dominance of centralised and commercial media could be countered through a public agora open to all.

Instead, the neo-liberal paradigm soon engulfed the still-nascent internet, slowly but inexorably extinguishing such hopes, to a point where talk is now of a “post-public sphere” in which even the concept of the truth is in question, and where political discourse is ever more distant from the “real”. Dominant social platforms pursue business models based on data-mining, surveillance and behavioural nudging, displacing social interaction with algorithm-driven digital mediation that replaces shared memories and collective experience with constant attention seeking and individual gratification.

One of the core features of digital platforms is their capacity to disseminate misinformation and disinformation very rapidly across huge swathes of the population. Before the platforms emerged, commercial media monopolies, counteracted only to a degree by public service, non-profit and community-based media, had always profited from publishing sensational material. Corporate owners, sensitive to advertisers and their own interests, also steered clear of criticising capitalism or pointing to its inequities.

The main social media platforms today do the same, but push it to an entirely new level. First, smartphones mean they secure users’ attention at a much younger age. Second, they generate individual consumer profiles, sustaining that attention for much longer periods – for commercial gain – with a constant bombardment of tailored consumer products and ideology, while dragging users down “rabbit holes” of false information and into echo chambers and filter bubbles. The dawn of AI heralds a whole new set of possibilities for generating “verisimilitude”, and even for forging “scientific” datasets.1 Third, by claiming not to publish content per se, but merely to enable others to circulate it, the platforms bypass traditional institutional content regulation (albeit light regulation in the case of newspapers).

This much we know; it is barely contested anymore. The negative consequences for political discourse in many countries are widely acknowledged, as leaders are elected on the basis of divisive and false claims targeting populations already suffering under the real-life onslaught of neo-liberal attacks on wages and job security, and weary both of information overload and of trying to interpret causes and solutions.

Current responses

So, let’s get positive, and ask what can and must be done.

A first positive note is that the situation has become so blatant that even mainstream economists and politicians can now talk about the “negative externalities” associated with the rapid and widespread dissemination of mis/disinformation and hate speech that goes hand in hand with the digital platforms’ business models, and about how these negatively impact trust in public institutions and generate political and economic instability, radicalisation and extremism. Securing “information integrity”2 is the terminology being used. Numerous countries have enacted, or are enacting, legislation to tackle various elements of this – though with little effective experience internationally to guide them.

Measures include attempts, notably in Canada and Australia, to channel some of the enormous profits of digital platforms, indirectly generated through users accessing media content, back to mainstream media providers. The EU is also a leading actor: the Digital Services Act (2022) obliges member states to establish a new regulator/commissioner for safety on the internet, and to ensure that very large platforms strengthen due diligence measures regarding illegal content and the deliberate manipulation of their services to achieve illegitimate goals. Institutional means to identify and remove problematic content can also be deployed.

In addressing these problems, a key challenge is to ensure that individual freedom of expression is protected, particularly against state power, while at the same time addressing how the space for political expression is being constricted and distorted by the digital platforms’ market power. This is especially acute for many developing countries, since significant expertise and resources are required to develop and implement such legislation. Countries tending towards authoritarianism face the very real danger that regulation and laws purporting to control these platforms will in fact be utilised to shut down dissenting voices, a risk evident, for instance, in India.3

Nevertheless, a broad consensus appears to be emerging among many governments that tackling misinformation, disinformation and hate speech will demand new and more robust approaches to regulation, and that voluntary or self-regulatory approaches simply do not work. It is no coincidence that Brazil, which holds the G20 presidency for 2024, is including information integrity among its priorities in the digital sphere.

Why existing responses are inadequate

Yet the process of building a vibrant public sphere faces even bigger challenges.

Even if the worst excesses of digital platforms can be curbed, the current configuration of internet and digital platform-based media and communication cannot begin to live up to early hopes for the internet and for the creation of a deeper and more democratic public sphere. Centralised attempts to control the innate tendencies of platform algorithms designed to maximise profits, even where they achieve their core goals, cannot in themselves generate a dynamic that will reverse the notion that “truth” is a chimera – a profound epistemological question about the nature of truth and knowledge itself – let alone create the space and dynamic for a new information eco-system based on human rights, trust and credibility, and capable of addressing diverse public interests.

Though the digital revolution has introduced some novel twists, reining in platform control in the above respects, though absolutely necessary, would see the resurfacing of the same challenges civil society faced previously: profit-driven media with a vested interest in promoting consumerism and perpetuating the wider economic and social structures, insufficiently counterbalanced by public service media and by emergent community-owned and independent non-profit media.

Thus, enhancing, or even securing, the “information integrity” of the major digital platforms’ content by no means addresses the deeper issues involved in building a public sphere and political discourse in the digital age. This leads to a further point.

At the risk of inducing pessimism at the sheer scale of the challenges, there exists a deeper problem with the major digital platforms, one that impacts indirectly but powerfully on efforts to build a vibrant and diverse public sphere.

Attempts so far to tackle the power of the digital platforms tend not to address their core business model. The platforms will continue to rely on behavioural surveillance and profiling, maximising attention capture from an early age through psychological manipulation and constant targeted advertising. Though the area needs further research, this is likely to have a significant impact on people’s capacity to engage meaningfully with ongoing political and wider social discourse, and tends to reinforce a self-identity oriented primarily towards market consumption. This, arguably, leaves people more vulnerable to misinformation and disinformation, and poorly equipped to engage in critical thinking. Measures are therefore also needed to tackle this core business model, to extract people, especially young people, from the grip of the consumerist mindset, and to offer a wider range of identities and life incentives.

In the end, political discourse, the dominant economic paradigm and indeed cultural expression are closely intertwined in society, and building a space largely free from market and consumerist forces is likely to be a precondition for a widely accessible and genuinely democratic communicative space.

Levels of challenge

The challenge must be tackled at several levels: conceptually, in terms of the language deployed; and practically, in policy and regulation from the international to the local level, in ways that can also enable people and communities to engage in and build democratic spaces and media.

Developing a conceptual framework

At the conceptual level, if the intention is to go beyond ameliorating some of the more egregious “negative externalities” of digital platforms, then the focus has to switch to the wider media and communication eco-system. The concept of information integrity, for instance, points legitimately to key concerns of accuracy, authenticity, source and so forth, related to an individual item of content. But this is detached from the wider eco-system that enables its production, valorisation (both financial and substantive), sharing, filtering, access and consumption. These processes are located in, and influenced by, specific economic, social, cultural and institutional contexts that enable content to be produced and stamp specific features on it. On its own, information integrity tells us very little about these wider structures. If the problems of political discourse are to be meaningfully addressed, responses must encompass this wider context.

There already exist several conceptual frameworks capable of incorporating this broad approach. For instance, the idea of communication rights, as distinct from freedom of expression, offers a more holistic rights-based eco-system, tracing all stages in society’s cycle of communication.4 From this perspective, the term ‘communication integrity’, as distinct from information integrity, may go some way towards capturing the wider institutional and systemic context and dynamic.

[Figure: society’s cycle of communication, as mapped in the communication rights framework cited above; not reproduced here.]

In that cycle, the shaded boxes broadly encapsulate freedom of expression, but the others (those in brackets are associated with duty-bearers, individual or institutional) embrace the entire eco-system of communication in a dynamic manner, potentially completing a virtuous cycle of communication that enriches political discourse. Amartya Sen’s work might offer a complementary source of inspiration here. He argues that citizens reshape democracy through processes of public reasoning, underscoring the significance of unhindered communication, critical scrutiny, human security and value formation.5

Thus, an appropriate framework can offer overall guidance for building a set of concepts and ideas that can capture the complexity of media and communication in a world largely dominated by digital platforms, and point to possible futures.

Policy and regulation

Effective policy and regulatory solutions will have to go further than those currently being developed or under discussion by governments and institutions. Designing and implementing measures to ensure information integrity, while preserving freedom of expression, is of course important, including fact-checking by transparent public institutions.

But extending these to encompass the full cycle of social communication will require a lot more. The EU Digital Markets Act (2022) takes initial steps towards tackling the power of the major digital platforms by identifying and imposing obligations on “gatekeeper” service providers6 (though the focus is primarily on enhancing competition), and by setting in place certain institutional safeguard mechanisms. The DMA can also mandate interoperability and data portability between core digital services and platforms, enabling users to migrate more easily to emerging platforms, including non-profit decentralised digital content platforms. These measures could, if vigorously enforced, have far-reaching long-term consequences for the diversity of digital media.

Platform algorithms, largely untouched by proposed and actual regulation, also need to be subject to public scrutiny. Platforms could be required, for instance, to forgo the simplistic but fiercely defended notion of “relevance” (which catches and retains attention) and to ensure that their algorithms promote content that is diverse, challenging, important and serendipitous. Some countries, such as Ireland, are also committed to providing direct financial support to non-digital media, including community media, to help them engage with the digital age, and potentially to guaranteeing the visibility and accessibility of alternatives on the major digital media platforms.

Measures to enable public service, non-profit and community-owned media to prosper are also essential to building a public space for critical media capable of enriching public debate. New, decentralised, non-profit business models are needed, with strong public support.

Yet even all of these measures combined are unlikely to be enough to tackle the depth of the problems and to confront the massive economic and political power of the dominant digital platform corporations. More radical measures may be required, including for instance the following:

A total prohibition on surveillance-based advertising, unyoking people from their status as data generators for profit maximisation and arresting the bombardment of targeted advertising in every facet of their lives.

At the level of service infrastructure, a complete structural separation of communication services, including media, from other forms of data-supported services, such as commerce or other service provision.

Major public support for the development of decentralised platform architectures and non-profit business models for media, particularly those emerging from civil society and communities and in developing countries.

Ultimately, a central enabling component could be the creation of a new UN body, established to take forward the inclusive agenda of the World Summit on the Information Society (WSIS) of 2003/2005 and to move towards democratic governance of digital society. The pressing challenge here is to move beyond the corporate-dominated multi-stakeholder models that have emerged since then, and to ensure multi-scalar, global consultation processes that guarantee a critical and powerful role for bottom-up voices and assemblies.

Notes

1. For this emerging possibility, see https://www.nature.com/articles/d41586-023-03635-w

2. Defined by the UN Secretary-General’s policy brief as the accuracy, consistency and reliability of information. Misinformation, disinformation and hate speech are identified as major threats to it. Our Common Agenda: Policy Brief No. 8. https://www.un.org/sites/un2.un.org/files/our-common-agenda-policy-brief-information-integrity-en.pdf

3. There is always a risk that the concept of information integrity can, when implemented in practice, mutate into one of ‘information security’, with strong overtones of centralised and illegitimate state control over permissible digital content. Nevertheless, the argument here is that the risks posed by platforms have become so compelling and threatening to (already fragile) democratic institutions that many governments are willing to put considerable effort into finding solutions that minimise the possibility of such mutation.

4. See https://waccglobal.org/wp-content/uploads/2020/07/Assessing-Communication-Rights.pdf

5. See for instance his influential work The Idea of Justice (2009).

6. In September 2023, these were named by the European Commission as Alphabet, Amazon, Apple, ByteDance, Meta and Microsoft, with a number of their services designated as core platform services. https://digital-markets-act.ec.europa.eu/commission-designates-six-gatekeepers-under-digital-markets-act-2023-09-06_en

Seán Ó Siochrú is a sociologist with Master’s degrees from McMaster University, Canada (1981) and University College Cork (1983). He has over 30 years’ experience, dividing his work between Ireland and over 50 countries worldwide, in evaluation, programme and strategy management and design, and capacity building. He has written and edited several books and many articles and papers, and is an advocate for communication rights.

Anita Gurumurthy is a founding member and executive director of IT for Change, where she leads research and advocacy on data and AI governance, platform regulation, and feminist frameworks on digital justice. She contributes regularly to academic and media spaces, serves as an expert on various bodies, including the United Nations Secretary-General’s 10-Member Group on Technology Facilitation and the Council of the Platform Cooperativism Consortium at The New School, New York, and has been a member of the Paris Peace Forum’s working group on algorithmic governance.
