14 Feb 2022 Impacts of digital transformation on communities and societies
When did you discover your communication rights? Your right to get transparent and trustworthy information, your right to read the texts that are meaningful to you and to share your own thoughts with others or to make them public?
I can tell you when I discovered my communication rights. By the age of 16, I had learned to touch-type at high speed. Typewriters were worth their weight in gold in East Germany during the 1980s. The Iron Curtain was still in place, and access to information and its distribution – let alone a free press – were severely restricted by censorship.
But I typed with a vengeance, copying books and magazines that had been smuggled into the East. Books by Dorothee Sölle, books about the Stalin Era, books written by dissidents. My parish taught me freedom of speech. It was the only space for open-mindedness. Illegally, I took back the communication rights that I had been denied.
Today, I work for the Heinrich Böll Foundation, which has more than 30 offices around the world. The foundation focuses on supporting and working with people in their struggle for freedom and rights. Political lobbying for those whose rights are being violated and whose voices have been silenced is a key part of our mission.
It is remarkable that within WACC, communication was defined very early on as a human right: a radical idea whose actual implementation and enforcement – as I see it – have yet to be realized, but which will be of central importance in the age of digital communications. This underscores the importance and the changing role of WACC as a network, with its ideas, impulses, prescience, and its special focus on vulnerable groups.
We must already note here that the relationship between freedom of religion and communication rights is by no means easy, and that it also requires debate and conviction within the Christian spectrum. My vantage point today is a political one: What are the most pressing issues in the digital world regarding social rights and human dignity at the moment?
Advantages of digital communications
During the pandemic we all learned to appreciate the advantages of digital communications. We even celebrated the Easter service online. We stayed in touch with our loved ones via Zoom and Skype; some of us were able to do our work completely online, and working from home will remain a natural part of our professional lives. Traffic in the air and on the streets decreased, so there was even some relief for the climate.
But at the same time, we saw governments using the pandemic as a pretext to install new surveillance apps. The World Health Organization spoke of an infodemic: ubiquitous disinformation and fake news, massively amplified by social media and by so many people’s access to both information and disinformation.
We also saw digital companies that raced like a rocket to the top of financial indices, whereas democracy indices marked a decline in the independence and functioning of the media. “The broad scale attack on the media as an independent actor and provider of information critical to the functioning of any democracy is intensifying” (V-DEM Report 2020, p. 25).
Such ambiguity generally accompanies our view of the digital sphere, not only when it comes to mis- and disinformation. However, less educated people, vulnerable groups, and people in rural areas are much more exposed to this infodemic of manipulated media than others. Their communication rights – which include access to diverse and truthful information – are violated.
This leads me to my most important thesis: it is not just social cohesion but human dignity itself that is at stake if we do not take up the fight for freedom and dignity in the digital sphere and scale it up to match our offline efforts.
Technical access to digital communications may still be a factor, but there is more: we must succeed in establishing ground rules for communication in the digital public sphere that enable minorities and vulnerable groups to exchange views and make themselves heard.
The same goes for mechanisms to counter fake news and to prioritize true empathy over instant emotions and a culture of indignation that quickly descends into violence. The forces of democracy and public welfare must stand together in the fight to build credibility and trust in the digital media world.
Digital participation is not a luxury or merely nice to have, but a prerequisite for the development of inclusive societies. Free access to information and unhindered opportunities to disseminate it form the backbone of democratic, open and prosperous societies.
In most democratic constitutions, freedom and civil rights are protected. At the same time, legal awareness that these rights must also apply in the digital sphere remains weak. Indeed, it is sometimes completely lacking.
Hence, we have to fight for digital rights themselves as well as for awareness of communication rights in the digital space. In this respect, I would like to indicate three major challenges and conclude with ideas for necessary steps towards just digitalization.
Surveillance and humiliating control versus informational self-determination and dignity
The American scholar Shoshana Zuboff has presented a sociological analysis of the digital era, which has become an epoch-defining international bestseller: “The Age of Surveillance Capitalism”. Some experts and activists have urged us to read it as an act of digital self-defence.
Zuboff describes “how global tech companies such as Google and Facebook persuaded us to give up our privacy for the sake of convenience; how personal data has been used by others not only to predict our behaviour but also to influence and modify it; and how this has disastrous consequences for democracy and freedom.” Zuboff defines this as “expropriation of critical human rights”. Data streams are being increasingly used for surveillance and control.1
Think of dating apps, where it is possible to swipe people away with a finger; think of health apps, where sensitive health data are delivered freely to companies, which may sell them on to health insurance companies.
Do you always know why and when a contact or a piece of information is visible to you on Facebook? Have you ever asked yourself what kind of knowledge about your personal life or the life of your community Facebook has collected? And have you ever asked Facebook to delete something?
Most of us have not. And so the companies become more and more intrusive. From the past we know how censorship works, but we know less about the manipulation of our emotions and behaviour. Zuboff says: “The age of surveillance capitalism is a titanic struggle between capital and each one of us. It is a direct intervention into free will, an assault on human autonomy.”
Meanwhile, non-democratic governments, not just China, have learned their lessons. “Non-democratic regimes have increasingly moved beyond merely suppressing online discourse, and are shifting toward proactively subverting and co-opting social media for their own purposes. Namely, social media are increasingly being used to undermine the opposition, to shape the contours of public discussion, and to cheaply gather information about falsified public preferences.”
Regimes have frequently mobilized their supporters to shape the content of online conversations. “Such assistance is particularly important in hybrid regimes like Russia, which do not engage in the direct blocking of websites and focus not on denying access but on successfully competing with potential threats through effective counter-information campaigns that overwhelm, discredit, or demoralize opponents.
These include techniques like mobilizing regime supporters to disrupt planned rallies, plant false information, monitor opposition websites, and harass opposition members. Allegations of ‘web brigades’, in which Russian commenters were paid to post pro-regime comments and discredit the opposition, first appeared over a decade ago. These organized groups were alleged to frequent popular pro-democracy forums to shape the public consciousness.”
We have to bear in mind that social media are not a safe space for human rights or environmental activists.
Let’s take a look at a recent example: Afghanistan. “Services like WhatsApp have been helpful in evacuating Afghans, but they can also make those individuals identifiable targets. The Taliban’s own presence on social media also raises questions about the platforms’ obligations. The Taliban established a Twitter presence in 2011 and has maintained WhatsApp and Telegram accounts since 2015. Since then, the group has been waging an Internet campaign, sharing its stories on social media and relying on clever propaganda, appealing to far-right groups around the world.
In theory, the Taliban are not welcome on these platforms. They were classified as a dangerous organization by both Facebook and YouTube some time ago. Twitter, on the other hand, has not imposed a blanket ban on the group. In practice, banning Taliban content is not that simple.”
This rapid evolution of government social media strategies has critical consequences not only for the future of electoral democracy and state-society relations, but also affects trust in information and communication generally.2
Let us be hopeful: there is a growing consensus on the urgency of regulating intermediaries with regard to the democratic public sphere and their influence on opinion-forming processes in society. There is open debate about a regulatory framework comparable to media legislation. Such a framework would automatically have repercussions on the fundamental rights of users.
Discrimination versus equality and social justice, fairness, and participation
Another prerequisite of democracy is equality. In a diverse society, equal treatment must be fought for over and over again, against every sign of discrimination or privilege: from education to the job market to the search for housing. Equality means equal, fear-free participation and access to public goods, spaces, and networks.
However, the business model of algorithm-based selection processes does not embody the principle of equality; rather, it reflects the worldview of the coders and the data of the past. Moreover, algorithm-based decision systems are trained according to subjective criteria, which are usually non-transparent. This holds significant potential for discriminating against entire populations based on individual characteristics, whether in application processes, the allocation of school places, the assessment of creditworthiness, or in legal decisions (such as in the US judicial system).
Another example is the facial recognition systems used in some countries. Often these cannot correctly identify the faces of Black, Indigenous, and other people of colour, or of women, yet some are used in law enforcement, prosecution, or prevention. Many examples of discrimination through decision algorithms come from the US. For Germany, in its Atlas of Automation, the NGO AlgorithmWatch has shown in which areas decisions are also made automatically, from personnel management and unemployment administration to the voice recognition of asylum seekers and predictive policing.
AlgorithmWatch has developed recommendations for action, ranging from the principle of “do no harm” to the demand for traceability of decisions and effective supervision of private-sector and government applications. This catalogue is an important contribution to sharpening and strengthening legal awareness in the digital space.
Hate speech and digital violence versus media freedom
The algorithms of intermediaries such as Facebook and Twitter multiply hate speech and ensure the rapid spread of disinformation. The disparagement of serious media and science and the creation of impenetrable information bubbles distort the open opinion-forming process, damage the democratic public sphere, and have also been proven to incite physical violence.
During the Covid-19 pandemic, the “infodemic” (mentioned earlier) of disinformation, conspiracy theories, and hostility to science was life-threatening, because scientifically verified information needed to protect health no longer reached certain segments of the population. How large these groups are varies from country to country, depending on the strength of the public media.
When it comes to hate speech, on the one hand there are perpetrators who claim freedom of expression for themselves; on the other, victims who suffer intimidation and bullying, whose personal privacy is no longer protected, and who are thereby prevented from freely developing their personalities. A whole series of scientific studies has demonstrated the dangerous effects: self-censorship and consequent psychological damage for those affected, as well as changes in the implicit attitudes and opinions of users, whether involved or not. Women in particular are more likely to be victims of hate speech and digital violence.
The task for the courts is to develop case law that is commensurate with the risk, which is massively increased by the sheer reach of the new media. Insults in the analogue public sphere cannot be compared to insults in the digital sphere, where they can be multiplied thousands of times in a global communications network. Existing standards, under which hate speech can be prosecuted in criminal law only in cases of an explicit insult and a physical, direct threat of violence, are not sufficient for acts committed on the Net.
To date, hate speech cannot be prosecuted adequately unless it explicitly calls for violence against an individual; in this way, most hate speech remains undetected and unpunished. And while hate speech happens on global platforms, perpetrators can only be prosecuted, and victims protected, under national laws.
What do we need to do?
As I have said, the forces of democracy and public welfare must stand together in the fight to build credibility and trust in the digital media world. Digital participation is a prerequisite for the development of inclusive societies, and free access to information, with unhindered opportunities to disseminate it, forms the backbone of democratic, open, and prosperous societies. Civil society, churches included, has to be involved in defining what it means to have privacy, self-determination, and security, and in ensuring equality and justice in the digital space.
Some important requirements for political regulation are the following:
- We need global regulations to restrict the influence of internet platforms – the EU has initiated such laws with the Digital Services Act and the Digital Markets Act.
- We need data protection like the EU’s GDPR on a global scale: global regulations, ethical standards and norms, and alliances across the Atlantic. An existing instrument is the Code of Practice on Disinformation among digital platforms.
- Data protection is less about protecting data and more about protecting the dignity of human beings.
- We need control mechanisms for the export of software that might be used for mass surveillance in authoritarian countries.
- We need an alliance of “techno-democracies” that pools resources for critical digital infrastructure, making it more independent.
- We need a “Pluralismuspflicht” [duty of pluralism] – an obligation on all big social media networks to secure pluralism, guaranteeing an equal and just space in which to hear and to be heard.
- We need transparency of algorithms and micro-targeting measures.
Another major topic is digital education. Through educational offerings – in schools, in associations, through political foundations and civil society – a new awareness of defensive rights must be developed, not only against the state, but also against private data collectors and users.
Political education includes giving people the tools to use the Internet in both directions. On the one hand, knowledge of one’s own rights in the digital space must be strengthened, because awareness of rights violations and knowledge of dangers protect civil liberties. On the other hand, political education also includes the know-how to use the Internet for one’s own benefit. Only with these skills does digital education become an instrument for digital participation. Of course, this involves many more aspects, because barriers to participation are exacerbated or multiplied in the digital world.
During the Covid-19 pandemic, lacking or inadequate Internet access had a very concrete impact on children’s educational opportunities, when digital learning was simply impossible without a stable connection. Here the fundamental right to the free development of the personality is violated from the outset.
Another essential factor is the participation of citizens in political decision-making. This includes broad public debate about security and freedom on the Internet, about personal privacy, freedom of expression, and much more. These debates, as conflictual as they may be, raise awareness of individual fundamental rights in the digitalized world.
It has been proven that sustained and meaningful political participation strengthens democracy. The more transparent and participatory politicians and administrators are in dealing with the data they collect, with political responses to safeguarding fundamental rights online, with the increase in communication and networking, and with the diverse information options, the more they strengthen the sovereignty of their citizens.
Fundamental rights will not prevail on their own or through voluntary commitments by corporations. That is why the broad support and joint commitment of (civil) society, faith-based organizations, politics, science, and business are needed to guarantee and protect civil rights in the digital age, as well as to make the digital space usable for the common good.
1. Joanna Kavenna, interview with Shoshana Zuboff: “Surveillance capitalism is an assault on human autonomy”, The Guardian, 4 October 2019.
2. Seva Gunitsky, “Corrupting the Cyber-Commons: Social Media as a Tool of Autocratic Stability”.
Internet freedom worldwide: https://www.nytimes.com/2014/03/12/opinion/the-future-of-internet-freedom.html
Informational self-determination (Informationelle Selbstbestimmung): https://www.bfdi.bund.de/DE/Datenschutz/Ueberblick/Was_ist_Datenschutz/Artikel/InformationelleSelbstbestimmung.html
Micro-targeting / Cambridge Analytica: https://netzpolitik.org/2018/cambridge-analytica-was-wir-ueber-das-groesste-datenleck-in-der-geschichte-von-facebook-wissen/
Discrimination by algorithms:
Global cases of hate speech / physical violence: https://hatebase.org/news/2019/11/18/does-online-hate-speech-cause-violence
Critique of the Network Enforcement Law: https://www.bmjv.de/SharedDocs/Gesetzgebungsverfahren/DE/NetzDGAendG.html, https://www.deutschlandfunk.de/netzwerkdurchsetzungsgesetz-netzdg-mehr-kontrolle-und.2907.de.html?dram:article_id=494955
Overview of regulation against misinformation worldwide: https://www.poynter.org/ifcn/anti-misinformation-actions/
Impact of disinformation on democracy: https://www.europarl.europa.eu/RegData/etudes/STUD/2021/653635/EXPO_STU(2021)653635_EN.pdf
Since July 2017, Ellen Ueberschär, along with Barbara Unmüßig, has been one of the presidents of the Heinrich Böll Foundation. As president, she is responsible for the foundation’s domestic policy department, for foreign and security policy, and for the European Union/North America department. In addition, she is in charge of the scholarship department, the Green Academy – a think tank of scholars and politicians – as well as the research archive “Grünes Gedächtnis” (Green Memory Archive). Between 2006 and 2017, she was the secretary general of the German Protestant Church Assembly (Deutscher Evangelischer Kirchentag). In this function, she coordinated six Protestant Church Congresses and one ecumenical one.
Photo: Albin Hillert. 13 September 2021, Berlin, Germany. International symposium on Social Justice in a Digital Age, co-organised by the World Council of Churches and the World Association for Christian Communication. Left: Dr Ellen Ueberschär.