Aligning AI systems with human values

Jim McDonnell

In the development and deployment of technologies based on AI, how can civil society organisations raise their voices against potential risks and harms, and in favour of values such as equity, ethics, digital rights, control, choice, and transparency?

In 1847, the poet Ralph Waldo Emerson wrote “Things are in the saddle, and ride mankind.” His words find an echo today. This is a time when growing numbers of institutions, governments and the wider public find themselves agreeing that the development and use of Artificial Intelligence (AI) urgently needs to be regulated.

In March 2023, a group of leaders in the AI field issued an open letter calling for large-scale AI experiments to be paused. The signatories described AI labs as locked in an out-of-control arms race and helping to create digital minds that “no one can understand, predict or reliably control.”1

Some voices, Elon Musk among them, went so far as to warn that uncontrolled AI could lead to the extinction of the human race. The paradox, of course, is that Musk and other technology billionaires are the very people who have poured, and continue to pour, huge quantities of intellectual, computing, and financial resources into AI applications.

The difference between AI, Artificial General Intelligence (AGI), and Generative AI (GenAI)

Since the open letter was published, the warnings and anxieties have proliferated. Fears are focussed on Artificial General Intelligence (AGI): systems which, it is claimed, could perform as well as or better than their human designers across an ever-growing range of intellectual tasks. Traditional AI systems, by contrast, are programmed and trained to follow specific rules in order to undertake particular tasks; they do not create anything new.

Most attention has recently been paid to the new Generative AI systems (GenAI). These are trained and retrained on huge amounts of data, using Large Language Models (LLMs), in order to generate wholly new content. LLMs do not work like the search engines people have become used to, such as Google; they are predictive algorithms, built to recognize and interpret the underlying patterns of human language and other kinds of complex data drawn from the internet. Unlike traditional computer models, Generative AI can therefore create new content, for example images and text.
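
What “predictive algorithm” means can be made concrete with a deliberately tiny sketch. The Python fragment below is purely illustrative and is not how any real LLM is implemented: it simply counts which word tends to follow which in a short text and then generates new text by repeatedly predicting the next word. Real LLMs perform essentially this task at vastly greater scale, with billions of learned parameters and internet-scale training data.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat saw the dog".split()

# Count which word tends to follow which in the training text.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after the given word."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else word

# Generate "new" text by repeatedly predicting the next word.
word = "the"
generated = [word]
for _ in range(5):
    word = predict_next(word)
    generated.append(word)
print(" ".join(generated))  # prints: the cat sat on the cat

The contrast with a search engine is the point: nothing is being looked up or retrieved. Each word is produced from statistical patterns in the training text, which is also why plausible-sounding but false output is possible.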

The limitations of AI

However, the accuracy and reliability of the new AI systems cannot be taken for granted. It is no surprise that the Cambridge Dictionary announced in November 2023 that its word of the year was “hallucinate”, defined as “to see, hear, feel, or smell something that does not exist.” Today the word is also applied to Generative AI chatbots such as ChatGPT which, trained on texts gathered from the internet, produce new but sometimes false content. Moreover, chatbots can now produce explanations which appear plausible but which are, in fact, false or misleading.

People can be deceived into interacting with deepfake chatbots designed to produce disinformation or fake news. A recent overview found that GenAI is being developed to make online disinformation campaigns even more powerful, whether by rival political groupings, for instance in Pakistan or the United States, or by regimes such as Venezuela’s. Other authoritarian governments, such as Russia, Iran, and China, use AI to enhance and refine online censorship.

Chatbots like ChatGPT have captured the public’s imagination because they generate text that looks like something a human being could have written, giving users the illusion that they are interacting with something other than a computer program. A growing number of critics worry about the development of AI “personal assistants”, sophisticated chatbots far more capable than Amazon’s Alexa. Inflection AI’s “Pi”, for example, is promoted as “Your personal AI” and tells the prospective user: “My goal is to be useful, friendly and fun. Ask me for advice, for answers or let’s talk about whatever’s on your mind.”

Such tools offer an experience of quasi-human emotional connection that can encourage vulnerable people, for example those who suffer from mental health problems or delusions, to develop an unhealthy dependence on the chatbot. These kinds of problems pose huge challenges for the designers of AI systems and for the institutions that seek to monitor, regulate, and govern them.

An increasingly common downside of chatbot text generation is that literary works and other content are used by companies running AI applications without the consent or knowledge of the original creators, and sometimes in ways that harm them. Under the headline “My books have been used to train AI bots – and I’m furious”, the British author Sathnam Sanghera summed up the reaction of many writers, including the Hollywood screenwriters who have recently succeeded, at least for now, in protecting their scripts from being used without permission.2

The anger and concern of those who see AI models built on their creative work without permission is mirrored in a more widespread unease that we are losing control over our own personal data and how that data is used. Among those who are concerned is the inventor of the World Wide Web, Tim Berners-Lee. He argues that a key challenge – now that so much personal data is linked to web applications – is to build a framework that enables internet users to control and protect their rights and personal data.

Berners-Lee has devised a system called Solid. Data about a user or entity is placed in a personal data store (a Solid Pod). Using Solid, it is the user who decides which web applications can access that data. Solid is already being trialled by the BBC, among others. Will the Berners-Lee vision come to fruition? Only time will tell. But his willingness to experiment with alternative models is refreshing.
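
To make the principle concrete, the sketch below is a hypothetical illustration in Python, not the real Solid API or the BBC trial: data sits in a store the user owns, and each application can read only what the user has explicitly granted.

class PersonalDataStore:
    """A conceptual stand-in for a personal data store (a "pod")."""

    def __init__(self, owner):
        self.owner = owner
        self._data = {}         # items the user has stored, e.g. "profile/name"
        self._permissions = {}  # which application may read which item

    def put(self, key, value):
        self._data[key] = value

    def grant(self, app, key):
        # The owner, not the application, decides what may be read.
        self._permissions.setdefault(app, set()).add(key)

    def revoke(self, app, key):
        self._permissions.get(app, set()).discard(key)

    def read(self, app, key):
        if key in self._permissions.get(app, set()):
            return self._data[key]
        raise PermissionError(f"{app} has not been granted access to {key}")

pod = PersonalDataStore(owner="alice")
pod.put("profile/name", "Alice")
pod.grant("news-app", "profile/name")
print(pod.read("news-app", "profile/name"))  # prints: Alice
# pod.read("ad-tracker", "profile/name")     # would raise PermissionError

The design choice the sketch highlights is simply where control lives: permissions are attached to the user’s own store rather than held by each application.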

Berners-Lee reminds us that vast amounts of data are the raw material for huge numbers of commercial and other applications, including AI processing. Many voices have pointed out that the quality of the data being mined is subject to many flaws, not least because the internal processes for collecting and processing it are so opaque. That lack of transparency inevitably makes it harder to filter out material that is derogatory, racist, abusive, deliberately misleading, simply incorrect, or biased in a myriad of ways.

Women and girls, for example, are stereotyped more than men and suffer from less access to technology.3 And sometimes removing toxic content can itself lead to exploitation. Time magazine found, for example, that OpenAI, in developing ChatGPT, used outsourced workers in Kenya earning less than $2 per hour to screen thousands of texts culled from the darker corners of the web.4

Moreover, the overwhelming concentration of data produced in the Global North means the de facto exclusion of a huge amount of material that reflects the concerns, tastes, perspectives and cultural insights of around 80% of humanity. In addition, the processes of AI data mining consume huge amounts of energy and financial resources while contributing to an ever-growing carbon footprint. And, of course, any impact on climate change tends to disproportionately affect those living in the Global South.

Regulating AI for the common good

Like all technological developments, the current AI wave does not determine the future. Human beings can and will invent creative alternatives. But they need to be much better informed about how the technology works and much more wary about the risks they face online. In short, there is a huge effort required to promote digital literacy and digital rights. This means involving citizens in the wider public discourse about how technologies like AI could be shaped and regulated for the wider public good.

Various initiatives by governments, big tech companies, think tanks, universities and civil society are under way to formulate regulatory and governance proposals that will provide some measure of oversight of the AI field. The AI Safety Summit, held in November 2023 at Bletchley Park, just north of London, brought together tech companies like Google, Meta and Microsoft and leading AI developers. Companies such as Stability AI (a partner of Amazon Web Services), Inflection AI (developer of the personal AI assistant Pi), and OpenAI (which developed ChatGPT) were, of course, central to the conversations at Bletchley. They were joined by stakeholders from governments and multilateral organisations such as the European Commission and the UN. However, only a handful of civil society organisations, including a few human rights organisations, were invited.

Coinciding with the Bletchley Park summit, the US issued an executive order requiring federal AI usage to respect civil rights and protect national security, and the EU announced that it was close to passing legislation regulating the use of AI. At the end of the summit, participants issued a Declaration which included pledges to ensure that AI is “designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible.”5 To the surprise, and a certain relief, of some sceptics, it focussed not only on avoiding AI-linked catastrophes but also on wider “priorities such as securing human rights and the UN Sustainable Development Goals.”6

The Summit was declared a success and participants committed to continuing the process, yet two big questions were left unanswered. The first: to what extent will states actually be able to regulate AI development and hold the technology companies accountable? The second: how can the process be opened up to bring in the insights and concerns of the wider public?

In a thoughtful article for the Guardian before the AI summit at Bletchley Park, the technology commentator John Naughton identified “three basic truths about AI” which democracies will have to recognize:

“The first is that the technology is indeed fascinating, powerful and useful for human flourishing. The second is that – like all technology – it has potential for benefit and harm. It will also have longer-term implications that we cannot at the moment foresee… So we’ll have to learn as we go. And finally – and most importantly – it’s not the technology per se that’s the critical thing, but the corporations that own and control it. Whether AI turns out in the end to be good or bad for humanity will largely depend on if we succeed in reining them in.”7

What Naughton doesn’t mention, however, is that reining in the tech giants (and, by extension, authoritarian states) must also involve real partnerships with the countries of the Global South, especially those which are struggling to build and sustain democratic governance. As the Centre for AI Futures at the School of Oriental and African Studies (SOAS) puts it, smaller nations, which are “experiencing the many social, political and cultural disruptions brought about by new forms of algorithmic governance”, often find their views pushed aside in favour of the interests of major players located in the US, EU and China.8

The need for agreement on the international governance of AI has been highlighted by the human rights and digital technology company Global Partners Digital. In a survey article published just before the Bletchley Summit, it commented:

“Whatever form of international governance for AI emerges, the key takeaway from this research is the urgent need for it to be shaped in a more open, inclusive and transparent manner….Only a diverse range of perspectives and stakeholders, especially from those in the Global South can ensure that benefits from AI are equitably harnessed across the world and that the implementation of AI technologies does not reproduce existing inequalities and power imbalances.”9

In Reframing AI in Civil Society, Jonathan Tanner and Dr John Bryden reveal how the British media and public think about AI.10 They identify four dominant mental “frames” that shape public attitudes: (1) AI represents progress; (2) AI is hard to understand; (3) AI presents risks to human beings; and (4) regulation is the primary solution to AI risks.

Though the public is strongly in favour of regulation, and regulation is essential, it is not a panacea in itself. There are many questions to ask, and there will be more, about the kind of regulation needed and its oversight, accountability, and responsiveness in a fast-developing sector. Civil society organisations that wish to raise public awareness and the level of public debate about AI are urged by Tanner and Bryden not to get too focused on the regulatory issue:

“The risk and regulation agenda strongly suits the interests of big AI companies who can position themselves as providers of technological solutions. System-wide or society-level risks could end up overlooked in favour of technical issues that AI companies can more easily demonstrate they are mitigating.”

Tanner and Bryden point out that getting bogged down in regulatory questions, important as they are, may only strengthen a focus on short-term issues and neglect a wider, long-term vision. There is a pressing need to draw attention to other questions.

What would a society look like that enables all citizens to leverage the upsides of digital technology? What values would that society have to put at the heart of how technology is developed and deployed? What skills would people need in order to navigate that society effectively and which organizations have the courage to bring such a future to life? If we can answer these questions, we have a starting point.

According to Brian Christian’s increasingly prescient book, The Alignment Problem (2020), the fundamental, pressing need facing society is to find robust ways to align AI systems with human values.11 To quote D. Fox Harrell:

“People can intentionally design computing systems with the values and worldviews we want… [but]… We need to be aware of, and thoughtfully design, the cultural values that AI is based on. With care, we can build systems based on multiple worldviews – and address key ethical issues in design such as transparency and intelligibility.”12

This is the challenge facing WACC and other civil society organisations. The current debate, such as it is, centres on the risks of AI, the harms it might cause, safety, and regulation. How can we raise our combined voices effectively to bring to the forefront other vital issues too: human and cultural values, equity, ethics, digital rights and citizenship, questions of power and control, choice and transparency?

Notes

1. https://futureoflife.org/open-letter/pause-giant-ai-experiments/

2. Sathnam Sanghera, My books have been used to train AI bots – and I’m furious (The Times, 29 September 2023) https://www.thetimes.co.uk/article/my-books-have-been-used-to-train-ai-bots-and-im-furious-708p2n20c

3. https://www.cigionline.org/articles/generative-ai-tools-are-perpetuating-harmful-gender-stereotypes/

4. https://time.com/6247678/openai-chatgpt-kenya-workers/

5. https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023

6. See https://theconversation.com/bletchley-declaration-international-agreement-on-ai-safety-is-a-good-start-but-ordinary-people-need-a-say-not-just-elites-217042

7. John Naughton, AI is not the problem, prime minister – the corporations that control it are (Guardian, 4 November, 2023) https://www.theguardian.com/commentisfree/2023/nov/04/ai-is-not-the-problem-prime-minister-but-the-corporations-that-control-it-are-rishi-sunak

8. https://www.soas.ac.uk/about/research-centres/centre-ai-futures

9. https://www.gp-digital.org/navigating-the-global-ai-governance-landscape/

10. Jonathan Tanner and John Bryden, Reframing AI in Civil Society: Beyond Risk and Regulation. (2023) https://lnkd.in/e6Urfnxd

11. Brian Christian, The Alignment Problem, 2020.

12. D. Fox Harrell, AI can shape society for the better – but humans and machines must work together (Guardian, 18 August 2023) https://www.theguardian.com/commentisfree/2023/aug/18/ai-society-humans-machines-culture-ethics

Jim McDonnell (PhD) is a Member of the WACC UK Board of Directors. He has over 30 years’ experience in communications and public relations specializing in reputation and crisis management. He has a long-standing interest in the intersection of communication technologies with culture, values, and ethics. In 2018 he wrote Putting Virtue into the Virtual: Ethics in the Infosphere (Media Development, December, 2018).
