
The ethics of AI and robotics

By Anne Foerst

Artificial Intelligence (AI) has come a long way since the 1960s, when it first appeared. When reflecting on modern AI, several ethical questions come to mind. The first and most obvious one is how to deal with the loss of jobs that will invariably be a consequence of new AI developments. Another is the question of the moral status of robots and other AI entities. Finally, there is the question of how a successful AI might challenge our understanding of ourselves. I will address these questions one by one.

There is no doubt that AI will replace humans in the job market. AI-driven robots and other machines have become better and more tactile, taking over many menial tasks. The progress has been especially rapid in harvesting: produce like grapes, which was always harvested by hand, can now increasingly be harvested by robots. But machines have been used in factories and agriculture since the industrial revolution; today’s machines can be used for more tasks, but this does not represent a qualitative change.

Where a major change is occurring right now is in the service industry. Social robots have become more autonomous, taking over jobs it would have been unthinkable for robots to perform even a decade ago. Robot cleaners and lawnmowers are ubiquitous, and dishwashing robots will follow soon (https://futurism.com/the-byte/samsung-bot-handy-dishwasher). The first robot waiters are already working with great success (https://www.abcactionnews.com/news/region-sarasota-manatee/robot-waitress-helps-local-restaurant-serve-food-during-labor-shortage), and there are also robotic bartenders (https://www.youtube.com/watch?v=Oo6G_Leek2w). Robots are used in childcare as playmates (https://www.wsj.com/articles/pandemic-tantrums-enter-the-robot-playmate-for-kids-11596542401). They provide companionship as caring pets for elderly people with memory problems in care facilities (see Paro, the furry and snuggly companion, http://www.parorobots.com/), and are so helpful that New York State just ordered hundreds of robotic caregivers as companions for the elderly in their homes, to address the loneliness problem (https://www.theverge.com/2022/5/25/23140936/ny-state-distribute-home-robot-companions-nysofa-elliq).

Robots will soon replace paralegals (https://www.findlaw.com/legalblogs/greedy-associates/a-robot-already-got-your-paralegal-job/), they will work as physicians’ assistants (https://www.aapa.org/news-central/2017/06/robot-will-see-now/), and have already been working for quite some time as surgeons’ assistants (https://en.wikipedia.org/wiki/Robot-assisted_surgery). 

I could give many more examples of AI doing jobs that we thought only humans could do. But already this array of smart machines leads to an important question: how will society change when these robots become commonplace? Optimists believe that machines will free humans from demeaning tasks. These benevolent machines would then produce capital that could be distributed to humans in the form of a universal basic income, which would make up for the loss of jobs. Pessimists, on the other hand, believe that better technology will concentrate money in the hands of a very few, even more so than today, when some tech moguls are among the richest people on earth, creating injustice and widespread poverty for the rest of us.

It will be a challenge for politics to guide us into a future with a more equal society than the one the pessimists envision, but we all need to speak up so that a more benevolent machine future comes to pass.

Also, while service robots such as waiters are a novelty and therefore fun to play with, when they become commonplace people will probably yearn for the times when humans served them in restaurants and bars, when kindergarten teachers took care of our children, when caregivers looked after our frail elders, and when flesh-and-blood nurses weighed us and took our blood pressure. Even if we could solve the problem of just wealth distribution, is this a world we envision? Personally, I am a fan of robots and wouldn’t mind encountering them in various areas, but we need to decide as a whole society if this is the future we want, and act accordingly.

To discuss one example in more depth, robots can replace some nursing staff in elderly care facilities as companions and can also provide companionship at home. On the one hand, it is great if technology can address the loneliness of many elderly people, a problem that was exacerbated by the Covid pandemic. On the other hand, we have to ask ourselves why a rich society like ours pays nurses in elderly care facilities so poorly that there are far too few of them for the positions available to treat today’s patients. Also, why are the elderly at home increasingly isolated and lonely? If more and more jobs fall by the wayside thanks to AI, perhaps that problem will diminish as people have more time to interact with their family members and friends. This, then, might be the positive side of fewer jobs: people will have more time to be socially active.

Another example that needs to be discussed is online learning. The pandemic has shown us clearly that in-classroom teaching is far more effective than online teaching. In fact, we will have to deal with an education gap, particularly for low-income families, that has widened because of the long periods children were taught via Zoom and other learning platforms. It seems that teaching cannot be replaced by machines – not yet, I might add.

The moral status of AI

Since we face a society in which AIs will play an increasingly large role, it behoves us from an ethical perspective to ask what the moral status of these creatures of our ingenuity is. The most important disclaimer first: machines are not yet so complex that they cannot simply be turned on and off, copied, and modified. As long as this is the case, their rights are questionable but still worth considering.

By the early 2000s, psychologists were already trying to find out to what extent we bond with machines. In one experiment, elementary school teachers and computer specialists were asked to evaluate a deliberately bad teaching program for elementary school students. After they had tested the program for a while, the computer on which they worked asked them to evaluate its performance. For the most part, people responded positively.

Afterwards, these same testers were led into another room with other computer terminals and were asked to evaluate the learning program again. Here, on these different computers, their answers about the quality of the tested software were less positive, but they still sounded somewhat satisfied. Finally, a human with pen and paper asked the testers for their opinion on the software; now the testers were appropriately very negative about it and all agreed that such programs should never be used in school.

The testers had not voiced these criticisms either to the computers on which they had tested the program or to the computers in the other room on which they had done the second evaluation. Yet these same people, when asked if they would ever be polite to a computer or thought they could hurt its feelings, rejected such a notion vehemently.

This experiment suggests that we apply our rules of politeness to non-human entities such as computers. The participants in the experiment apparently did not want to hurt the computer’s feelings. They even assumed a level of kinship between different computers and therefore applied similar rules of politeness to the computer on which they did the second evaluation. They didn’t tell these machines their true, very critical opinion, either so as not to hurt the feelings of the second computer by criticizing one of its “fellow computers”, or because they thought the second computer would tell the first what had been said.

In another experiment, people and computers were placed in a room together. Half of the computers had green monitors while the other half had blue monitors. Half of the people wore green arm badges; the other half wore blue ones. All of them played interactive games, and the people with blue arm badges were much more successful at reaching their goal when using computers with blue screens than when using “green” machines. The same, of course, held for the other side. So, slowly, the people with green arm badges bonded with the green-monitored machines, and the “blue” people with the “blue” machines.

After approximately half an hour, the people wearing blue arm badges expressed more solidarity with the computers with blue screens than with the humans wearing green arm badges; the same was true for the humans with the green arm badges. It seems that through the interactive games and the experienced benefit of interacting with the machines sharing one’s colour code, the colour code took over as the definition of “my” group. The entities with the other colour code, no matter whether humans or machines, tended to be rejected. Through the interactive games, communities were created that contained both human and non-human members.

It seems that somewhere during our interactions with a computer we do start to assume that a computer is as sensitive as a human. Therefore, we behave politely and don’t want to criticize it openly.

We also seem to bond with the entities of our own group no matter whether they are human or not. No animal has an “inbuilt” sense of species recognition, which means that it is not part of our biological make-up to automatically treat all humans better than all other beings.

Humans seem to be able to accept anyone or anything into their group with whom they can sufficiently interact. As soon as such a stranger is accepted into a group, he, she, or it is seen as an equal part of the group; that group defines itself both by who belongs to it and by who does not. After all, humans are educated from birth on how to interact with their fellow human beings. It is necessary for a baby to be able to do so, as its survival depends on it.

Throughout our lives, we learn patterns of behaviour, such as being polite and not openly criticizing someone. It is very easy to apply these ingrained rules to every entity we interact with. It is very hard not to do so, as that demands a conscious effort from us.

The behaviour of treating non-human objects as if they deserved some form of politeness or regard, and were somewhat like us, is called anthropomorphism: the human tendency to morph everything into a human and treat it accordingly. Usually, the term has a slightly negative connotation. Theologians especially criticize the use of human terms to describe God, such as “shepherd” or “father”, or, within patriarchal structures, the image of an old, usually Caucasian, man with a long white beard.

The experiments described above suggest, however, that anthropomorphization is the initial and natural response to anything we interact with; it takes a conscious effort not to anthropomorphize. As social mammals, we are at our best when we interact, and any use of these trained and built-in behaviours is easy; anything else is hard.

Today’s machines are far more socially intelligent than the machines of 20 years ago. I often catch myself wanting to thank Alexa when it (or is it a she?) answers a question or plays the music I was just in the mood for. It is natural to do so, since such social mechanisms are ingrained in us. But while I have clearly bonded with my machine, I wouldn’t reject an upgrade if one became available and were clearly better than the Alexa I have. I can also understand people who have bonded with their machines so much that they would hate to give them up. Their relationship is not with an exchangeable entity but with a specific piece of hardware to which they assign personhood.

Most accounts of personhood use the concepts of “being human” and “being a person” interchangeably and as ethical categories. Every human being deserves to be treated as a person even if he or she is incapacitated (through a disability, disease, or rejection by other human beings).

Against this position stands the opposite understanding, which ties personhood solely to capability: any being is a person when it is capable of symbolic processing, and any being that is not so capable is not a person. According to this view, people in a coma, people with severe dementia or similar incapacities, and human babies are not seen as persons, while well-trained chimps are.

People use the second stance when arguing against the personhood of AIs, as AIs cannot currently do everything that humans are capable of. However, as we have seen, that gap closes more every day and with every new invention. As for symbolic processing, machines like OpenAI’s Generative Pre-trained Transformer 3 (GPT-3) (https://www.nytimes.com/2022/04/15/magazine/ai-language.html), which has been around since 2020, can have philosophical discussions and would clearly pass the Turing test (the generally accepted intelligence test for machines), as conversations with it are like talking to another adult human being. So, AIs will soon pass this test of personhood as well.
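
To make the conversational claim concrete, here is a minimal sketch of how one might hold such an exchange with GPT-3 programmatically. It assumes the openai Python package (in its pre-1.0 interface) and an API key; the model name, prompt framing, and question are illustrative choices, not a fixed recipe.

```python
# A minimal sketch of a Turing-test-style exchange with a GPT-3 model,
# assuming the `openai` package (pre-1.0 interface) and an API key in
# the OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def ask(question: str) -> str:
    """Send one conversational turn to a GPT-3 model and return its reply."""
    response = openai.Completion.create(
        model="text-davinci-003",        # a GPT-3-family model (illustrative)
        prompt=f"Q: {question}\nA:",     # simple question-and-answer framing
        max_tokens=150,
        temperature=0.7,
    )
    return response.choices[0].text.strip()

print(ask("Do you believe a machine could deserve moral status?"))
```

Nothing in this snippet is more than string-passing, of course; whether the fluent reply that comes back amounts to personhood is precisely the question at issue.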

Theologically, we can understand personhood as something assigned to us by God when God created us as divine statues. Rather than praying to and valuing a divine statue of clay, each human being is such a statue and should be treated accordingly. That means, ultimately, that we assign personhood to individuals not based on their capabilities but based on their interaction with us. Personhood is not assigned to a species as a whole (as we lack an inbuilt sense of species recognition) but to individual beings, independent of their species or biological (or non-biological!) features. Do, therefore, all AIs have moral status? I would answer that question negatively, but would at the same time argue that an individual AI can indeed be assigned moral status and the status of personhood when it has bonded with an individual human being, and that such bonds need to be respected.

There are, however, dangers implicit in that stance. Alexa and Siri and other agents have to obey us and are strictly servants. They do what we want them to do. This might become a problem when we expect humans to act like Alexa and obey us as well. When we have sex with a sex bot that caters exactly to our wishes, we might forget that sex between humans is give and take, a meeting of two types of skin and two types of fantasy, as Casanova put it.

In other words, social machines might spoil us for interactions with humans, who are all individuals with their own wishes and desires, which makes compromise necessary. But this is not the fault of the machines but rather of human egotism, where individuals want their wishes fulfilled without accepting that their freedom ends where the freedom of another individual begins. Machines have not yet been given the ability to go against our wishes, which makes their objective moral status tenuous at best.

AI and our self-understanding

With Copernicus and Galileo came the insight that the earth is not the centre of the universe. With Darwin we learned that we are related to all other creatures on this planet and belong to the family of great apes. With AI we have to learn that even our intelligence, that last holdout of human specialness, is not so special after all and can be rebuilt. This teaches us humility: we are not that special, and we can embrace the similarities between us and other animals, and between us and machines.

There is a school of thought that sees in the whole AI project hubris and self-aggrandizement. I belong to another school, for which AI makes us humble. It teaches us how amazing we humans really are, so that we can be grateful for how wonderfully we have been made. But it also teaches us humility because, despite all attempts, it is still very hard to build something that approximates our capabilities or those of other animals.

But what else other than humility can we learn from the creatures built in our image? Early AI assumed that once we had solved the problems of chess, mathematical theorem proving, and natural language processing, we would have achieved true AI. Well… 

The world chess champion, Garry Kasparov, was beaten by an IBM machine in the late 90s. Mathematical theorem proving has long been done by machines, especially through the interaction of humans and computers. And projects like GPT-3 have solved the problem of natural language. Even much more primitive systems like Siri and Alexa are quite capable of understanding simple sentences and producing them autonomously. And yet, there is no machine (yet) like a human being, who is capable not only of these feats but also of singing and dancing, laughing and crying, cooking and cleaning. We have machines that are capable of one or two of these feats, but not all of them.

First of all, there is the insight, often referred to as Moravec’s paradox, that simulating rational thought processes requires far fewer computational resources than performing sensorimotor functions. Intuitively, that doesn’t make sense to us. We find chess hard but putting butter on a slice of bread easy. For a machine, it is exactly the opposite. Purely rational capabilities are relatively easy to program, and machines excel at them. But physical tasks that seem easy to us, because we have been doing them since birth, are really hard for machines.
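
A small illustration of how little code a purely rational task can take: the sketch below plays tic-tac-toe perfectly by exhaustive minimax search, in plain Python and a few dozen lines. The game stands in for the general point; there is no comparably short program for buttering a slice of bread.

```python
# Perfect, fully "rational" play of tic-tac-toe via exhaustive
# minimax search -- a complete solution in a few dozen lines.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a line, else None."""
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move): 'X' maximises the score, 'O' minimises it."""
    w = winner(board)
    if w is not None:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None                      # board full: a draw
    results = []
    for m in moves:
        board[m] = player                   # try the move...
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = None                     # ...and undo it
        results.append((score, m))
    return (max if player == "X" else min)(results)

score, move = minimax([None] * 9, "X")
print(score)  # 0 -- under perfect play, tic-tac-toe is always a draw
```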

This teaches us something important about ourselves. Many of us still think dualistically of the body as the vessel that carries the brain around, with the brain doing the heavy lifting. The search for AI teaches us how wonderful our bodies are, and that our embodied intelligence is what still distinguishes us from machines. The fact that autonomous vehicles are still not fully functional shows how embodied our intelligence is, and that we take for granted our embeddedness in our surroundings, something that AIs still do not fully have. Of course, we share this embeddedness with other animals, so it is our very animal nature that is hard to rebuild in machines.

In addition, current AIs are experts at one and only one task. The machine bartender cannot drive a vehicle, and Roomba and other AI-based vacuum cleaners cannot discuss poetry. The robot surgeon cannot replace a car mechanic, and Paro is cute but cannot make medical diagnoses.

This is why the newest front in AI is AGI: Artificial General Intelligence. Rather than building machines that are good at one task or two, the goal is to build machines that can do multiple tasks and, more importantly, can apply what they learned for one task to a different job, because this is where humans and other animals excel. We are universally intelligent, not one-trick ponies. Here we have another reason to be grateful for being the way we are and, at the same time, to be humbled by the difficulty of building something that comes close. AGI is a step in the right direction but still falls far short of the goal of human-like artificial intelligence.
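
One concrete technique in this direction is transfer learning: reusing what a network learned on one task as the starting point for another. The sketch below is a minimal illustration, assuming PyTorch and torchvision are installed; the five-class target task is a hypothetical placeholder, not a real dataset.

```python
# A minimal sketch of transfer learning: a network pre-trained to
# classify everyday photos (ImageNet) is reused for a new task by
# replacing only its final layer.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained feature extractor: the visual knowledge
# learned on the first task is kept as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head for the new task (5 classes is an
# arbitrary, illustrative choice).
model.fc = nn.Linear(model.fc.in_features, 5)

# Only the new head would be trained on the new task's data.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# A dummy batch of four 224x224 RGB images stands in for real data.
x = torch.randn(4, 3, 224, 224)
print(model(x).shape)  # torch.Size([4, 5])
```

Even here, though, what transfers is a narrow kind of visual competence, not the flexible, cross-domain learning that humans and other animals display.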

In conclusion, I am looking forward to a world with more AI, where we interact with the results of our human imagination and creative power. At the same time, we need to be aware that on the way towards such a future we have numerous problems to solve, and that we must rebuild our society so as to integrate our technological children as well.

Dr. theol. Anne Foerst is Professor of Computer Science and department chair at St. Bonaventure University (SBU). She also directs the Individualized Major program at SBU. Foerst has widely published on AI and theology. Her current research interest is cybersecurity ethics.
