MD 2024/1 Editorial

Over one hundred years of sci-fi books, magazines, TV programmes, and films have made it easier (as Doctor Who fans will testify) to believe that aliens from outer space are intent on taking over Earth. In 1938, courtesy of a notorious radio broadcast of “The War of the Worlds”, they were Martians. Today, they are supercomputers imbued with Artificial Intelligence: ailiens from inner space.

Many writers had explored the idea of machines taking over aspects of society before Isaac Asimov published his sci-fi short story “The Evitable Conflict” in 1950. In it, Earth is divided into four geographical regions, each with a powerful supercomputer known as a Machine that manages its economy. The Machines conspire to take control of humanity’s destiny.

A more recent example of this storyline can be found in Rehoboam (Westworld, 2020), a quantum AI computer system whose main function is to impose order on human affairs by carefully manipulating them and predicting the future through analysis of a vast dataset collected by a global corporation.

Small wonder that the line between humans and robots (including computers like HAL 9000 in Arthur C. Clarke’s Space Odyssey series) has been blurred, with the result that aliens tend to have human features, and ailiens are made to “think” like humans. Siri and Alexa are family friends ready to serve and entertain.

In their textbook Artificial Intelligence: A Modern Approach (1995; 4th ed. 2020), Stuart Russell and Peter Norvig offer four potential goals or definitions of AI, differentiating computer systems on the basis of a human and an ideal approach. The human approach demands systems that think or act like humans, while the ideal approach demands systems that think or act rationally. It is the tension between the human and the ideal that raises numerous, almost intractable, questions of ethics. Who is responsible for the actions of machines?

In Greek tragedy, actors playing gods entered the stage from above, lowered by a crane, or from below through a trapdoor – hence the Latin deus ex machina: a person or event that is introduced into a situation suddenly and unexpectedly, providing a contrived solution to an apparently insoluble difficulty.

Many later playwrights and authors used this device to resolve a conundrum, and some 20th-century philosophers used the expression to describe the concept of the mind existing alongside and separate from the body as a ghost in the machine – an idea also explored by Isaac Asimov in his collection of sci-fi short stories “I, Robot” (1950). Particle physicists searching for neutrinos and antineutrinos thought of them as ghosts, and theologians have long speculated about the intervention of God in human affairs.

No wonder, then, that people imagine that AI machines embody a sentient being, when all they really do is join up the dots at extraordinary speed across ever-larger amounts of data. What is of genuine concern, however, are the uses to which AI machines are put and their inevitable impact on human society. When the European Union’s Panel for the Future of Science and Technology studied AI ethics (2020), it concluded:

“The current frameworks address the major ethical concerns and make recommendations for governments to manage them, but notable gaps exist. These include environmental impacts, including increased energy consumption associated with AI data processing and manufacture, and inequality arising from unequal distribution of benefits and potential exploitation of workers… It will be important for future iterations of these frameworks to address these and other gaps in order to adequately prepare for the full implications of an AI future. In addition, to clarify the issue of responsibility pertaining to AI behaviour, moral and legislative frameworks will require updating alongside the development of the technology itself.”1

Image above courtesy of Pixabay. File made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. Source: https://pixabay.com/es/illustrations/inteligencia-artificial-cerebro-3382507/


Many actors in civil society are concerned that digital technologies, including those based on AI, can be appropriated by governments, security services, and global corporations to repress, control, manipulate, and profit from ordinary people – who have their own expectations of how these technologies might improve lives and livelihoods. Viable alternatives to “more of the same” are urgently needed as the NGO IT for Change, based in Bengaluru, India, urges in this issue of Media Development:

“A just and equitable AI paradigm hinges on the radical restructuring of the global regime of knowledge, innovation, and development. This requires a structural justice approach to AI governance that is able to articulate the pathways for multi-scalar institutional transformation.”

Fortunately, the European Union seems to be ahead of the game. In December 2023, the European Parliament and EU member states agreed on the parameters for the world’s first comprehensive laws to regulate AI. The laws will not come into force until 2025 at the earliest. However, they will govern social media and search engines, including giants such as X, TikTok, and Google, and they will be based on a tiered system in which the highest level of regulation will apply to those machines that pose the highest risk to health, safety, and human rights.

In terms of communicative justice, the digital era needs “Societies in which everyone can freely create, access, utilise, share and disseminate information and knowledge, so that individuals, communities and peoples are empowered to improve their quality of life and to achieve their full potential.”2 To that end, digital media literacy is crucial, since demystifying how digital technologies and AI work – and how they are controlled or manipulated – will mean greater awareness of the possible dangers and pitfalls. As Jim McDonnell points out in his article:

“Like all technological developments, the current AI wave does not determine the future. Human beings can and will invent creative alternatives. But they need to be much better informed about how the technology works and much more wary about the risks they face online. In short, there is a huge effort required to promote digital literacy and digital rights. This means involving citizens in the wider public discourse about how technologies like AI could be shaped and regulated for the wider public good.”

Notes

1. The ethics of artificial intelligence: Issues and initiatives. Panel for the Future of Science and Technology. European Parliamentary Research Service (March 2020).

2. From “Shaping information societies for human needs”. WSIS Civil Society Declaration (2003).

This issue of Media Development has been produced in collaboration with IT for Change, which aims for a society in which digital technologies contribute to human rights, social justice and equity. Its work in the areas of education, gender, governance, community informatics and internet/digital policies pushes the boundaries of existing vocabulary and practice, exploring new development and social change frameworks. Network building is key. IT for Change is in Special Consultative Status with the Economic and Social Council of the United Nations.
