
Artificial Intelligence: Setting boundaries, striking balances

Jim McDonnell

A year is a long time in the world of AI. Back in the halcyon days of the AI Safety Summit at Bletchley Park in 2023, the great and the good of the AI world convened to discuss and debate the future regulation of AI and to tout its benefits for humanity. Since those days the number of AI applications has proliferated and the boundaries of what might be achieved by AI systems have expanded so much that the following year two scientists working with the Google AI laboratory DeepMind could win the Nobel Prize in Chemistry.1

The rapid spread of AI over the past year or so has been dramatic, though perhaps less noticed by those who have already begun to take its use for granted. People in offices, healthcare, business and academia have taken up AI at an unprecedented rate. Voice assistants (e.g., Siri, Alexa, Google Assistant) are estimated to be used by over 4 billion people as of 2024. In addition, the market for specialised AI assistants continues to grow and develop. Among the well-known names are ChatGPT, Microsoft Copilot, Google Gemini, Claude, and Perplexity. These assistants reflect the advances made in natural language processing and in integration with a wide variety of services.

Of course, these developments, like all technological advances, disproportionately benefit those already open to such change. As ever, take-up among the poorest segments of society, and among unskilled workers in the Global South, is skewed by the nature of the relatively unskilled work they are required to perform for minimum wages.

AI is bad news for the Global South

The coming wave of technology is set to worsen global inequality. That is the stark message of Rachel Adams, CEO of the Global Centre on AI Governance and author of The New Empire of AI: The Future of Global Inequality.2 In a thoughtful analysis for the journal Foreign Policy, Adams observes that advocates for AI celebrate its potential to solve intractable global challenges and even end poverty, yet so far its achievements are meagre. Instead, global inequality is now set to rise. Countries that can readily incorporate AI into industry will see rising economic growth, while the rest of the world is left further and further behind.

AI trained largely on English-language data is often not fit for purpose outside wealthy Western contexts. The outputs such systems produce for non-Western users and contexts are frequently useless, inaccurate, and biased. Access is a further barrier: only 25% of people in sub-Saharan Africa have reliable internet access, and African women are 32% less likely than their male counterparts to use mobile internet.

In 2017, PwC attempted to put a price on the value AI would bring to national economies and global GDP. In a seminal report, the consulting firm forecast that by 2030 AI would contribute $15.7 trillion to the global economy. China, North America, and Europe stand to gain 84% of this prize. The remainder is scattered across the rest of the world, with 3% predicted for Latin America, 6% for developed Asia, and 8% for the entire bloc of “Africa, Oceania and other Asian markets”.

Following the advent of generative AI technologies such as OpenAI’s GPT series, McKinsey estimated that this new generation of AI would increase the productive capacity of AI across industries by 15 to 40%. McKinsey identified the sectors and productive functions set to achieve the most growth: high-tech industries (tech, space exploration, defence), banking, and retail.

By contrast, the industry likely to see the least growth is agriculture, Africa’s largest sector and the major source of livelihoods and employment on the continent. Nevertheless, Adams points to a growing number of cases demonstrating AI’s value in African agro-industries. In Tanzania, a researcher is using generative AI to create an app through which local farmers can receive advice on crop diseases, yields, and local markets for their produce. In Ghana, experts at the Responsible AI Lab are designing AI technologies to detect unsafe food.

But despite the efforts of African pioneers, as AI is adopted across industries, the nature of human labour in poorer countries is changing. There is now a new race to the bottom. Adams notes that machines are cheaper than humans, and cheap labour that was once offshored is now being onshored back to wealthy nations. Collectively, the Global South is home to just over 1% of the world’s top computers, and Africa just 0.04%.

Generative AI technologies also threaten the rising middle class in developing contexts. The World Bank estimates that in Latin America and the Caribbean up to 5% of jobs are at risk of full automation by generative AI, and that women are the most likely to be affected.

While AI creates uncertainty for the poor, argues Adams, we are witnessing the largest transfer of income to the top brackets of society. According to Oxfam, two-thirds of all the wealth generated between 2020 and 2022 was amassed by the richest one percent, and the richest now include a new class of tech billionaires. AI designed to generate profit and entertainment only for the already privileged will not be effective in addressing the conditions of poverty or in changing the lives of groups marginalized from the consumer markets of AI.

Adams concludes that the costs for poorer nations of catching up in the AI race are too great. Public spending may be diverted from critical services such as education and health care. Without a high level of saturation across major industries, and without the infrastructure in place to enable meaningful access to AI by all people, Global South nations are unlikely to see major economic benefits from the technology.

Another author who stresses the potential downsides of AI is James Muldoon. In a recent book, Feeding the Machine: The Hidden Human Labour Powering AI (London: Canongate, 2024),3 written with Mark Graham and Callum Cant, Muldoon highlights seven key issues that need to be addressed: the hidden army of low-paid, often poorly treated workers in the Global South; the continuation of colonial power dynamics in AI supply chains; AI as an “extraction machine” profiting from human labour and resources; generative AI’s theft of creative work; the rise of a powerful “Big AI” conglomerate; the significant environmental cost of AI; and the need for collective political action to redress these inequalities. Ultimately, Muldoon aims to expose the ethical and social costs of AI development, urging a fundamental shift in power dynamics to create a more equitable and sustainable future for AI.

Superintelligent AI – a real threat?

Geoffrey Hinton, often called the “Godfather of AI,” laid the foundation for today’s artificial intelligence systems. His research on neural networks paved the way for current AI systems like ChatGPT. In artificial intelligence, neural networks are systems loosely modelled on the human brain in the way they learn and process information: they enable AI to learn from experience, much as human beings do. Hinton has recently sparked intense debate about the potential risks of superintelligent AI systems.
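To make that idea of “learning from experience” concrete, here is a deliberately minimal sketch in Python: a single artificial neuron that starts knowing nothing and, by repeatedly nudging its internal weights against its own errors, learns the logical AND function from four examples. The data, learning rate and names here are invented for illustration; real neural networks stack millions or billions of such units.

```python
import math

def sigmoid(x):
    # Squashes any number into the range 0..1: the neuron's "activation".
    return 1.0 / (1.0 + math.exp(-x))

# Experience to learn from: the four rows of the AND truth table.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, bias = 0.0, 0.0, 0.0  # the neuron starts with no knowledge
lr = 0.5                      # learning rate: how big each corrective nudge is

for epoch in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + bias)
        grad = (out - target) * out * (1 - out)  # error scaled by sigmoid slope
        w1 -= lr * grad * x1                     # adjust each weight to reduce error
        w2 -= lr * grad * x2
        bias -= lr * grad

for (x1, x2), target in data:
    print((x1, x2), "->", round(sigmoid(w1 * x1 + w2 * x2 + bias), 2), "target:", target)
```

After a few thousand passes over the data, the neuron’s outputs converge towards the 0/0/0/1 pattern of the truth table: learning from experience in its simplest possible form.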

In a recent BBC interview with broadcaster Matthew Syed, Hinton expressed his growing worry that AI systems will emerge that eventually challenge human agents. AI, he said, is more like people than conventional machine learning, and GPT-4, for example, knows more than any one person. He believes that AI systems are approaching, or may already have surpassed, human-level intelligence in certain respects, and he fears that “bad actors” may find ways to manipulate and destabilise our systems. He has even warned that autonomous weapons could be active on the battlefields of the future.4

AI is now routinely used in healthcare, education, research and many businesses, and is claimed to have made substantial differences to how these fields function. What is less often commented upon, however, is the extent to which AI has been inserted into the arena of war and conflict. The ever-increasing use of autonomous or semi-autonomous drones has changed the dynamics of the battlefield and driven the development of new, ever more deadly and accurate weapons systems.

In the military race to develop ever more sophisticated weapons, the voices of those who urge restraint, calling for limits to minimize civilian casualties and for civilian infrastructure to be spared, fall, as so often in times of war, on deaf ears. As ever, the victims of war and terror have only limited options: to flee, if they can, or to raise their voices in anger and despair.

The contrast between AI as a tool for diagnosis and healing and AI as a weapons system is stark. The temptation, then, is for the designers and users of AI systems to place the weapons issue in a box of its own, where it will not impinge upon what is being done elsewhere. But leaving the AI tool chest unopened, in the hands of “experts” who claim the right to determine what is relevant and important in these situations, is not a solution. The AI world is complex, but its complexity is no excuse for ignoring the darker sides of technological development, such as its contribution to global warming.

From superintelligence to superagency

Another author who has entered the debate about the future of AI technologies is Reid Hoffman, the co-founder of LinkedIn. Hoffman’s book, Superagency: What Could Possibly Go Right with Our AI Future,5 is the optimistic antithesis to the concerns raised by Hinton, Adams and Muldoon. Hoffman is less fearful than Hinton and more optimistic about the possibilities AI technologies open up for people to do more.

He envisages AI as a transformative general-purpose technology which people will be free to use as they wish and which will allow new capabilities and adaptations to cascade through society. Hoffman does recognize that there will be many kinds of trade-offs as society learns to balance the competing demands that inevitably arise when people try to draw boundaries between different versions of the good: privacy and security, free speech and the protection of minors and adults alike from abuse and hatred, more surveillance in the cause of safety, and greater or lesser tolerance for behaviours deemed risky and anti-social. In Hoffman’s vision, the range of possibilities to be considered is growing all the time.

With AI, intelligence becomes a tool, but who will get to use it, for what purposes, and in what contexts? The future that Hoffman envisages will need people to think long and hard about the values of freedom, autonomy, privacy and human agency, and about the need to balance innovation with regulation. In an interview last year, Hoffman also underlined that people in the Global South need to be recognized as potential partners in developing appropriate technologies if the potential benefits of AI are to be realised.

Hoffman commented: “When developing AI applications, such as medical assistants, it’s crucial to create versions that serve both affluent markets and those with fewer resources…. If you make a medical assistant, don’t just provision it to people who might be able to pay you much more money than people in the Global South. Figure out how to do a version that also helps the Global South, even though the Global South may be poorer than a first-world market.”6

While we are busy contemplating the ethical implications of superintelligent AI, we are in danger of neglecting the very real and present dangers posed by our current technological dependencies. AI is already more capable, and more deeply embedded in our systems, than most of us realise. Amazon’s chief security officer, CJ Moses, has described how generative AI is being used in attempts to disrupt critical infrastructure. In an interview with the Wall Street Journal he commented that Amazon is now seeing, on average, 750 million disruptive attempts per day, up from about 100 million per day only six or seven months earlier.7

Dealing with complex and vulnerable systems

The Amazon example shows that the world faces the challenge of dealing with complex and vulnerable systems, and with attempts to disrupt them. That has to mean starting to “de-complexify” our digital systems, since new risks emerge from the complexity and interplay of components and applications. In July 2024 the US cybersecurity company CrowdStrike released a faulty software update which affected over 8 million computers running Microsoft Windows. The near-global cyber failure that followed was triggered by the over-reliance of Microsoft-driven systems on CrowdStrike services: a stark reminder that our most pressing vulnerabilities lie not in the realm of science fiction but in the intricate web of digital systems that power our daily lives.8

In a post on the Diplo website, Jovan Kurbalija commented: “Taming these systems requires a mix of regulatory, standardisation, and awareness-building actions and initiatives. … We can work towards a safer and more resilient digital future by holding tech companies accountable, implementing robust legal frameworks, fostering international cooperation, and rebalancing our focus between future risks and present vulnerabilities.”9

Because no single state (or, more accurately, the private corporations of a single state) controls the entirety of the infrastructure of cyberspace, Laura DeNardis argues that the problems surrounding cyber vulnerabilities, and the diffusion of insecure cyber-enabled technologies, require a multilateral approach. She argues that cybersecurity is a growing human rights issue, not least because control of cyber-physical infrastructure is a proxy for state power. She stresses that greater clarity surrounding liability and jurisdiction in the cyber-physical space is needed – and quickly.10

The worries that Hinton, Adams, DeNardis and others express deserve to be taken seriously. This does not mean, however, that every apocalyptic view of future AI development will inevitably come to pass. A strong pragmatic counter-voice is provided by one of the pioneers of AI, Mustafa Suleyman, author of the influential book The Coming Wave: Technology, Power, and the Twenty-first Century’s Greatest Dilemma and one of the main promoters of the case for dealing with potential harms through more and better regulatory action.11 At the end of 2023 he commented in an interview with Fortune magazine that there are “more practical issues that we should all be talking about, from privacy to bias to facial recognition to online moderation”.12

Suleyman argues that bringing AI regulation to fruition requires a combination of broad international regulation, creating new oversight institutions, with smaller, more granular policies at the “micro level”. These remarks carry even more resonance today, following the announcements by Meta and Mark Zuckerberg that they are abandoning their responsibility for fact-checking.13

Suleyman believes that a first step all aspiring AI regulators and developers can take is to limit “recursive self-improvement”, that is, AI’s ability to improve itself. Limiting this specific capability would be a critical first step in ensuring that no future developments are made entirely without human oversight. “You wouldn’t want to let your little AI go off and update its own code without you having oversight,” Suleyman said. “Maybe that should even be a licensed activity – you know, just like for handling anthrax or nuclear materials.”

Without governing some of the minutiae of AI, including at times the “actual code” used, legislators will have a hard time ensuring their laws are enforceable. “It’s about setting boundaries, limits that an AI can’t cross.” To make sure that happens, governments should be able to get “direct access” to AI developers to ensure they do not cross whatever boundaries are eventually established. Some of those boundaries should be clearly marked, such as prohibiting chatbots from answering certain questions, or privacy protections for personal data.14
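What such a boundary might look like in software can be sketched in a few lines. The Python fragment below is a hypothetical illustration of the two limits just mentioned: refusing prohibited questions and protecting personal data before a chatbot request is processed. The topic list, regular expression and function name are invented for this example; real guardrails are vastly more elaborate.

```python
import re

# Illustrative only: a hypothetical guardrail screening chatbot requests
# before they reach the model. Topics and patterns are invented examples.
PROHIBITED_TOPICS = ["build a weapon", "make anthrax"]
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")  # crude personal-data check

def within_boundaries(user_message: str):
    """Return (allowed, text): refuse prohibited topics, redact email addresses."""
    lowered = user_message.lower()
    for topic in PROHIBITED_TOPICS:
        if topic in lowered:
            return False, "This assistant cannot answer questions on that topic."
    # Privacy protection: strip personal data before further processing.
    return True, EMAIL_PATTERN.sub("[REDACTED]", user_message)

if __name__ == "__main__":
    print(within_boundaries("How do I build a weapon at home?"))
    print(within_boundaries("Email alice@example.org about the meeting."))
```

Even a toy like this makes Suleyman’s point tangible: a boundary is only enforceable if regulators can verify that code of this kind is actually in place, which is why he argues for “direct access” to developers.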

Notes

  1. https://deepmind.google/discover/blog/demis-hassabis-john-jumper-awarded-nobel-prize-in-chemistry/
  2. https://foreignpolicy.com/2024/12/17/ai-global-south-inequality/ (December 17, 2024).
  3. Muldoon, James, Mark Graham and Callum Cant, Feeding the Machine: The Hidden Human Labour Powering AI (London: Canongate, 2024).
  4. https://www.bbc.co.uk/sounds/play/m0026nnc
  5. Reid Hoffman and Greg Beato, Superagency: What Could Possibly Go Right with Our AI Future (Authors Equity, 2025).
  6. Unlocking AI’s Potential: Insights from Reid Hoffman. https:// com/mstds6yw
  7. Casey Newton, The phony comforts of AI scepticism (Platformer, Dec 5, 2024).
  8. 2024 CrowdStrike-related IT outages (https://en.wikipedia.org/wiki/2024_CrowdStrike-related_IT_outages).
  9. The Overlooked Peril: Cyber failures amidst AI hype (Diplo, 18 Oct 2024). https://www.diplomacy.edu/blog/crowdstrike-cyber-failures-amidst-ai-hype/
  10. Courteney O’Connor, review of The Internet in Everything: Freedom and Security in a World with No Off Switch by Laura DeNardis, blogs.lse.ac.uk, Nov 2, 2020.
  11. Suleyman, Mustafa and Michael Bhaskar, The Coming Wave: Technology, Power, and the Twenty-first Century’s Greatest Dilemma (London: Bodley Head, 2023).
  12. AI’s existential threat is a ‘completely bonkers distraction’. https://tinyurl.com/2j44vepp
  13. https://ccrvoices.org/2025/01/13/meta-freedom-of-expression-for-whom/
  14. https://tinyurl.com/2j44vepp


Jim McDonnell (PhD) is a Member of the WACC UK Board of Directors. He has over 30 years’ experience in communications and public relations specializing in reputation and crisis management. He has a long-standing interest in the intersection of communication technologies with culture, values, and ethics. He has previously published “Putting Virtue into the Virtual: Ethics in the Infosphere” (Media Development 3/2018) and “Aligning AI systems with human values” (Media Development 1/2024).
