Putting virtue into the virtual: Ethics in the infosphere


A Rohingya woman, one of more than 600,000 Rohingya who fled Myanmar towards the end of 2017, sits in the entrance to her shelter in the Kutupalong refugee camp in Bangladesh.
Photo: Joel Carillet/ACT Alliance


Mark Zuckerberg, the founder of Facebook, recently faced legislators in Washington and Brussels to answer charges that the company had failed to protect the personal data of 50 million users and had allowed fake news and political manipulation to flourish.

Watching Zuckerberg as he attempted to answer the pointed criticisms of his global social network/advertising platform brought to mind another ‘Z’: the worker ant hero (voiced by Woody Allen) in the animated film Antz (1998). In a famous scene, he reassures a fretful Princess Bala (Cameron Diaz) that he will rescue her mother. “Don’t worry, I know almost exactly what I’m doing”, he says. The princess looks unconvinced. Zuckerberg’s interrogators did too.

If there was one inescapable conclusion that the watching audience could draw from the often meandering exchanges, it was that elected officials certainly don’t “know almost exactly” what they are doing. They, like the rest of us, are finding it hard to understand and adjust to the seismic shift occurring in the configuration of our daily lives. Even Zuckerberg, despite his pivotal position in the social media universe, looked just as unprepared as the rest of us to grapple with the speed of change and the complexity of the realignments taking place.

We are moving rapidly and irreversibly into a new world and in the midst of that transition it is hard to discern how best to respond. The worlds of big data and communications are not just converging but have already merged; artificial intelligence (AI) has long left the realms of science fiction. Luciano Floridi, Professor of Philosophy and Ethics of Information and Director of the Digital Ethics Lab at the University of Oxford, contends that “we no longer live online or offline but onlife, that is, we increasingly live in that special space, or infosphere, that is seamlessly analogue and digital, offline and online.” That is to say, the often-made distinction between virtual reality and real life is no longer relevant, if it ever was.

As we struggle to keep our bearings in the midst of the technological whirlwind, Floridi suggests that we need to focus our attention on a “fundamental question, which is socio-political and truly crucial: what kind of mature information societies do we want to build? What is our human project for the digital age?”1

The power of technology

This is not a new question. However, in recent years it receded into the background as consumers, politicians, bankers and the media became blinded by the power of technological behemoths like Apple, Google, Facebook and Amazon. Now, the glamour of social media and the digital future has become tarnished by disputes about tax, fake news and data harvesting scandals.

Though these events evoked outrage and complaints of ethical malpractice, most of the response has been about trying to find governance, regulatory and technical fixes. Much less attention has been given to examining the underlying ethical assumptions that shape how these companies operate as they help build the information society.

Floridi, in his discussion of digital ethics, sets out clearly why ethics is so central to the task of building a humane info-society. As he says:

“Digital ethics, with its values, principles, choices, recommendations and constraints already influences the world of technology much more than any other force. This is because the evaluation of what is morally good, right or necessary shapes public opinion – hence the socially acceptable or preferable and the politically feasible, and so, ultimately, the legally enforceable, and what agents may or may not do. In the long run, people (as users, consumers, citizens, patients, etc.) are constrained in what they can or cannot do by organizations, e.g. businesses, which are constrained by law, but the latter is shaped and constrained by ethics, which is where people decide in what kind of society they want to live.”

This is why ethical reflection has to play a greater part in discussions about regulation and self-regulation in the context of information and communication technologies. Take, for example, the recent upsurge of interest at the political level in Europe in the concept of self-regulation as applied both to social media companies and to social network users themselves. The pressure is on the likes of Facebook, Twitter and Google to police themselves and to remove content from their platforms that is deemed illegal or offensive. This is to some extent a way of passing on the costs of enforcement to the businesses themselves, but it also challenges the underlying assumption that Facebook, for example, is a neutral platform.

However, treating a social media platform as a publisher has potentially serious implications for freedom of expression. Self-regulation in this instance raises many questions about accountability and transparency of decision-making and about who has editorial responsibility for what is published online. Such questions should not be set aside as the discussion focuses on technical feasibility or strict legal liability.

Regulation and self-regulation are a necessary but not a sufficient response. Social media providers themselves, advertisers and other commercial users, and individuals have to commit to implementing what has been agreed. But this commitment depends upon a broad-based ethical consensus on what behaviour is acceptable. Managers and organizers of self-regulated systems have to show that they adhere to an ethos that makes their self-regulation meaningful and ethical. Those who take decisions and manage within the digital media system have to put virtue into the virtual.

In short, the designers, managers and operators of systems need civic and ethical formation. In a recent interview Sean Parker, the founding President of Facebook, admitted that ethical considerations took second place when the social media model was being designed:

“The thought process that went into building these applications, Facebook being the first of them, … was all about: How do we consume as much of your time and conscious attention as possible? And that means that we need to sort of give you a little dopamine hit every once in a while, because someone liked or commented on a photo or a post or whatever. And that’s going to get you to contribute more content, and that’s going to get you . . . more likes and comments.”

Parker also admitted that, seeing the unintended consequences of this model today, he has become “something of a conscientious objector” on social media.2

Media literacy

Self-regulation is also now often linked with the need for greater media literacy. The expansion of social media, fake news scandals, cyberbullying, online propaganda, radicalism, and privacy and data protection issues have pushed media literacy up the political agenda. Much is made of the need for people to learn to claim their rights and exercise their responsibilities as digital citizens. As part of this process, media literacy is seen as a way for people to learn to self-regulate, using acquired skills of critical awareness, knowledge and other capabilities.

Now some media educators are realising that these skills are not enough and that a value-based ethical formation is also needed. An interesting study in the UK by the think tank Demos, entitled The Moral Web, considers the importance of “educating for character” and forming “digitally virtuous citizens” as part of education for good citizenship and the inculcation of civic virtues among adolescents. The aim here is for young people to learn how to act in a virtuous way online and to exercise their moral responsibilities. Demos claims that “digital citizenship is a promising approach to support healthy choices on social media.”3

For a long time, reflection on ethics has remained rather in the background of discussions around the construction and operation of the infosphere. It has always been there behind the scenes, doing a great deal of unheralded work, but it has not yet taken centre stage or become the focus of sustained attention. There are signs, however, that this is changing. In the last few days, three media items that explicitly raised ethical questions caught my attention.

The most prominent was the report of the faked death of the Russian journalist Arkady Babchenko in Kyiv, Ukraine. In a dramatic way, the story has prompted serious questions about the moral responsibilities of journalists and the commitment to truthfulness in reporting. The International Federation of Journalists (IFJ) called Babchenko’s hoax murder “intolerable”. The IFJ president, Philippe Leruth, complained: “By falsely spreading the news … the Ukrainian authorities have gravely harmed the credibility of information”.4

The second story concerned Google’s partnership with the Pentagon to develop artificial intelligence for analysing drone footage. The affair generated a petition signed by about 4,000 employees who demanded “a clear policy stating that neither Google nor its contractors will ever build warfare technology.” In response, Google “promised employees that it would produce a set of principles to guide its choices in the ethical minefield of defence and intelligence contracting.”5

And the third incident was a letter in the Financial Times from the head of the UK’s Nuffield Foundation, a major funder of research in educational and social policy. Tim Gardam, a former senior media executive, wrote in response to an editorial calling for ethical reflection about the design and use of artificial intelligence and the common good. Gardam makes a strong plea that:

“…Above all, we need to embed ethical thinking in the tech industry, as an inherent part of its culture. There are many in the sector who recognise the urgent need to establish common norms to translate ethical principles into practical decisions, as well as to explore the question of whether the underlying logic of any innovation reflects the values we want in a future society.”6

One of WACC’s strengths is that, in its commitment to the principles of communication, it has developed an ethical values framework that gives a consistent underpinning to its actions and interventions. At the same time, it has both a strong presence in promoting communication rights at the grassroots and a history of advocating for communication rights in different forums.

So WACC is well placed to explore how the concepts of digital and information rights can mesh with communication rights in terms of governance, regulation, self-regulation and the drive for media (and information) literacy. It can, in its own field, help in many ways, as Gardam says, to translate ethical principles into practical decisions.

In particular, WACC can work with others to try to ensure that the poor and marginalised are not forgotten in the infosphere. What is happening to the people of the peripheries and their communities? Where in the information society will there be accessible places of encounter between all citizens? Who will be recognized as having the right and the opportunity to communicate in the infosphere? What about the communication rights of people with disabilities, including visual and sensory impairments?

Through its capacity to bring together advocacy, grass roots implementation and ethical reflection – as it did in the world dominated by analogue media – WACC can make its own unique contribution to placing core values, moral behaviour and ethical choices at the heart of public debate around the human future of the digital world. 

In this endeavour it can help bring the vision and aspirations of the Civil Society Declaration issued after the World Summit on the Information Society (WSIS) 2003 closer to a lived reality: 

“We are committed to building information and communication societies that are people-centred, inclusive and equitable. Societies in which everyone can freely create, access, utilise, share and disseminate information and knowledge, so that individuals, communities and peoples are empowered to improve their quality of life and to achieve their full potential.”7 

Notes

1. Luciano Floridi, “Soft Ethics and the Governance of the Digital”, Philosophy and Technology (2018) 31:1–8. https://tinyurl.com/y7b2vqbb

2. “Sean Parker unloads on Facebook: ‘God only knows what it’s doing to our children’s brains’”, Axios, 9 November 2017. https://www.axios.com/sean-parker-unloads-on-facebook-2508036343.html

3. Harrison-Evans, P. and Krasodomski-Jones, A. The Moral Web: Youth Character, Ethics and Behaviour. London: Demos, 2017 https://is.gd/Bt3yoq

4. “Arkady Babchenko’s fake murder: questions that need answering”, Guardian, 31 May 2018 https://tinyurl.com/y9vka7nv

5. “How a Pentagon Contract Became an Identity Crisis for Google”, New York Times, 30 May 2018 https://tinyurl.com/ybtf92xy

6. Gardam, T. Letter, Financial Times, 30 May 2018. http://tinyurl.com/ybmr4neu

7. Shaping Information Societies for Human Needs. 2003 http://tinyurl.com/y7cp8afm

 

Jim McDonnell (PhD), founder of McDonnell Communications, has over 30 years’ experience in communications and public relations and is an internationally known trainer, speaker and writer. He is a specialist in reputation and crisis management. At the international level he has acted as Director of Development for SIGNIS, the World Catholic Association for Communication, and he is currently a Director of WACC UK.
