Media Development 2025/3 Editorial
13 August 2025
The term Artificial Intelligence (AI) covers three broad categories. Artificial Narrow Intelligence (ANI) has limited, task-specific capabilities; examples include Google Translate and Siri. Artificial General Intelligence (AGI) would match human capabilities across a wide range of tasks; it remains hypothetical, although today’s advanced chatbots are often presented as steps towards it. And then there is Artificial Superintelligence (ASI): machines more capable than humans, with potential applications in healthcare, scientific research, and the military. Such machines are either the solution or the problem – depending on your point of view.
On the positive side, AI can assist with data analysis, brainstorming, and the drafting and proofreading of texts. It can help generate social media posts, structure workshops, turn complex descriptions into readable web texts, and convert data into graphics. It can translate and transcribe voice recordings in multiple languages, and offer automated sign language interpretation.
On the negative side, AI might sidestep human oversight and become self-aware, leading to unforeseen consequences and even existential risks. The superior cognitive abilities of Artificial Superintelligence could allow it to manipulate systems or even gain control of advanced weapons. Military uses include autonomous warfare systems, strategic decision-making, target recognition, and threat monitoring. Human intervention would be subordinate to “machine thinking”.
Consequently, the most important questions surrounding AI systems are ethical ones – in terms of their development, application, and impact. In the words of Gabriela Ramos, UNESCO’s Assistant Director-General for Social and Human Sciences:
“In no other field is the ethical compass more relevant than in artificial intelligence. These general-purpose technologies are re-shaping the way we work, interact, and live… AI technology brings major benefits in many areas, but without the ethical guardrails, it risks reproducing real world biases and discrimination, fuelling divisions and threatening fundamental human rights and freedoms.”
AI is impacting freedom of expression, freedom of information, and public interest journalism in terms of accuracy, authenticity, and trust. At a time when misinformation, disinformation, and fake news bedevil journalism and social media platforms, AI has been seen as a means of dispelling confusion and restoring trust. However, as Julius Endert, senior consultant to the Deutsche Welle (DW) Akademie, points out, if AI is to be used in professional journalism, we need concrete business and editorial decisions that:
“[R]esult in structures and processes grounded in organizational values, ethical guidelines, and policies. This work must be tailored to the size and scope of each organization and developed incrementally. Crucially, the perspectives of all stakeholders – especially regarding data governance, privacy, and transparency – must be included.”
The DW Akademie, which focuses on international media development, journalism training and knowledge transfer, proposes a three-tiered approach to AI governance:
- Ethical foundations: Define ethical reference points and principles as the foundation of an overarching AI strategy. Develop your strategy and guidelines.
- Compliance systems: Establish systems to ensure adherence to legal and other relevant norms.
- Operational implementation: Create and implement responsibilities, processes, and structures according to your AI strategy.
Alongside these considerations, there is the issue of public safety. Will these extraordinarily powerful AI systems be subject to oversight that prevents them from behaving in unexpected and potentially catastrophic ways? For decades this was framed as a military question: who, or what, decides to wage nuclear war? But today, what if machines get to decide who is a refugee? Or who deserves a heart transplant? Or who is eligible for schooling?
Beyond such immediate concerns, many people are also worried about the long-term impact of AI on social and cultural identity: the way people understand themselves and others, their sociocultural environments, and the way technologies shape and alter human behaviour.
“For as the pace of change increases, not just the economy but the very meaning of ‘being human’ is likely to mutate… Such profound change may well transform the basic structure of life, making discontinuity its most salient feature.”1
Continuity has always been a measure of stability. For good or ill, it has enabled political, cultural, and social identities. Continuity itself relies on a certain tension between the public and the private: on the ways information and knowledge are shared, commoditised, or even weaponised. Lemi Baruh and Mihaela Popescu’s article in this issue of Media Development underlines the dilemma:
“Privacy as a dynamic process of ‘becoming’ – essential for shaping our identities and fostering autonomy through an ongoing dialogue with our past, present, and future – faces profound challenges in the age of pervasive artificial intelligence… It’s not just about data points being collected; it’s about how AI and algorithms actively intervene in our temporal experience, potentially derailing our capacity to make meaningful choices (and learn how to make choices) that we can claim as our own, thereby threatening our journey of becoming who we aspire to be.”
Artificial Intelligence is here to stay. Technological development never goes backwards, and its impact is always far-reaching and unpredictable. What we must do – and urgently – is think ethically, act transparently, and communicate the implications of AI development widely and intelligibly. Only then can AI serve humanity responsibly.
Note
1. Yuval Noah Harari (2018). 21 Lessons for the 21st Century. Signal/Penguin Random House Canada, pp. 269-270.