
Imagining immortality

Philip Lee

Reflections on digital technologies and artificial intelligence, their potential to change the nature of being human, and the unintended consequences of a Promethean quest for scientific knowledge.

In his book Irish Nocturnes, philosopher and poet Chris Arthur observes the unease with which human beings contemplate how each and every one of us will be forgotten by the world in which we live:

“Our physical extinction is close-shadowed by a series of scarcely audible echoes of oblivion as, one by one, the pinprick glints of memory which may hold some likeness of us for a while gutter and go out” (Arthur, 1999: 60).
Arthur asks what survives of individuals such as Ramesses II, Shakespeare, Rembrandt, or Beethoven, and, therefore, what will survive of you or me? Sadly, the answer at the moment is relatively little, although the nearer the person is to the present age, the more there is that may last.

Of Ramesses II (Ozymandias in Shelley’s poem and the most popular candidate for the Pharaoh of the Exodus), whose mummy is on display in Cairo’s National Museum of Egyptian Civilization, there remains the empty shell that housed his soul, but nothing to tell us the timbre of his voice. Of Shakespeare, the greatest plays in the English language, yet few traces of the man. Of Rembrandt, a magnificent series of self-portraits, whose pen and ink drawings tell us he was right-handed. Of Beethoven, an ear-trumpet, but no photographs.

Digging up a Viking burial mound or reconstructing a Chalcolithic face using forensic techniques can throw a shadowy light on the past. Yet “time purges the particular, the individual, into the anonymity of the nameless mass” (Arthur, 1999: 63) and what is uncovered is sometimes also unremarkable.

Until very recently the recording of history was a political enterprise. Official histories were those that created and reinforced national identities and imperial and economic boundaries. A recent issue of this journal (2/2023) examined archival justice’s claim to fairer and more balanced representation in the public collections of information and data that frame society’s interactions with itself. Here, new technologies increasingly offer the opportunity to remember alternative lives and points of view.

Capturing sounds and images

Until well into the 19th century having a portrait painted was the prerogative of the rich, so it was fortuitous that the rise of a more affluent middle class coincided with the invention of photography, which transformed at a stroke how ordinary people saw themselves. The new medium was relatively cheap and professional photographers began to flourish. People did not have to be wealthy to have a “portrait photo” taken and entire families could be photographed at one sitting. People were now able to be the subjects as well as the objects of visual social history.

The first device that could record and reproduce sound was the “phonograph”, built in 1877 by Thomas Alva Edison (1847-1931), the most prolific inventor since Leonardo da Vinci. Essentially this was the machine that first allowed posterity to hear the voices and sounds of an earlier age. The initial success of sound recording was given a boost by the rapid development of radio and film. On Christmas Eve 1906, Reginald Fessenden (1866-1932), one-time chief chemist in Thomas Edison’s research laboratories, succeeded in transmitting a short speech, thus inaugurating wireless broadcasting.

The indefatigable Thomas Edison turned his attention to film, capturing “Fred Ott’s sneeze” as part of a publicity stunt on 7 January 1894, although most people credit the invention of cinema to the Lumière brothers, who showed films of a steam train arriving at a station and workers leaving their factory in Lyon to a paying public in Paris on 28 December 1895.

Motion pictures started out as scenic shots of interesting locales (which evolved into documentaries), short newsworthy events (which evolved into newsreels), and filmed acts of famous performers like the American sharp-shooter Annie Oakley. The “silent era” ran from the mid-1890s to the period 1928-35, when most film industries switched to production with sound – a further instance of technological convergence. In parallel, radio developed as a medium for news, drama, light entertainment, jazz, classical music, and advertising.
For the first time in human history, people could see and hear about contemporary events – and about themselves as actors in history. They could be recorded aurally and visually, but they could also record themselves. When magnetic tape recorders became widely available at the end of the 1940s, closely followed by videotape (developed in 1956 but only available domestically from 1969), tape recordings and home movies could be sent to distant relatives instead of letters. Audiocassettes replaced reel-to-reel, videocassettes replaced home movies, and people literally took communication into their own hands.

The other great invention that enabled people to visualise themselves and their world was television. By 1948, after a lengthy period of development, millions in the USA found themselves watching coverage of the Republican and Democratic parties’ national conventions and the television era began with a vengeance. The public service broadcasting ethic of early television was increasingly challenged by commercial light entertainment in which the domestic and commonplace became daily fare and soap operas took up social questions such as teenage pregnancy, divorce, euthanasia, and homosexuality.

In 1969, the first version of the Internet was created and set up as a network (called ARPANET) between four university “nodes” in the USA. Rapid developments followed: email (1971); the World Wide Web (proposed in 1989 and opened to the public in 1991); web browsers; search engines; social media platforms. All were avidly seized upon as alternative ways of communicating that were initially unregulated and uncensored.

Digital convergence and integration

The first computers were assembled in the USA in the 1940s. The rapid developments that followed focused on reducing size and increasing speed and capacity. Today’s computers use integrated circuits with microcontrollers comprising a system of multiple, miniaturized and interconnected components fixed into a thin substrate of semiconductor material. Computers are desk-top, lap-top, hand-held, and “embedded” in other technologies and even in human beings. In mid-2022, scientists at the University of Michigan announced the development of a computerised “microdevice” measuring just 0.04 cubic millimetres – smaller than a grain of rice – whose potential use lay in a range of medical applications.

Digital technologies are used to store and interact with vast quantities of information. The Human Genome Project was a world-wide research effort aimed at analysing the structure of human DNA and determining the location of its genes – then estimated at some 70,000, a figure now known to be closer to 20,000. The information generated by the project became the source book for biomedical science in the 21st century, helping scientists to understand and eventually to treat many of the more than 4,000 genetic diseases that afflict humankind.
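To get a sense of scale, the raw sequence itself fits into surprisingly little storage. The sketch below is a rough back-of-the-envelope calculation in Python, assuming roughly 3.1 billion base pairs and two bits per base (A, C, G, T) – figures used here purely for illustration:

```python
# Back-of-the-envelope estimate of the raw storage needed for one human genome.
# Assumed figures for illustration: ~3.1 billion base pairs, 2 bits per base.
BASE_PAIRS = 3_100_000_000
BITS_PER_BASE = 2  # four possible bases: A, C, G, T

raw_bytes = BASE_PAIRS * BITS_PER_BASE / 8
print(f"Raw genome sequence: ~{raw_bytes / 1e9:.2f} GB")  # roughly 0.78 GB
```

Less than a single gigabyte for the sequence itself; the truly vast quantities of information lie in everything layered on top of it.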

Important issues surrounding this research remain to be addressed. Who owns genetic information? Who should have access to it and how should it be used? How does knowledge about personal genetic information affect the individual and society’s interactions with that individual?

Also in the USA, the Visible Human Project (VHP) created anatomically detailed, three-dimensional representations of both the male and female bodies. The first “visible human” was Joseph Paul Jernigan, a 39-year-old Texan convicted of murder and executed by lethal injection in 1993. His body was frozen to minus 160°F and “imaged” with the same magnetic resonance and computer technologies used in medical diagnosis. He was then sliced into 1,878 one-millimetre sections to be photographed and digitised.
By late 1994 Jernigan had been “reincarnated” as a 15-gigabyte database. One year later, the body of a 59-year-old woman from Maryland who died of a heart attack was given the same treatment. Her identity is unknown. Both digital bodies can be accessed via the Internet.
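The 15-gigabyte figure is easy to reconstruct. A minimal sketch, assuming cross-sections digitised at roughly 2048 × 1216 pixels in 24-bit colour (the resolution reported for the male dataset; treat the numbers as illustrative):

```python
# Rough reconstruction of the size of the Visible Human male dataset.
# Assumed figures: 1,878 axial sections, each about 2048 x 1216 pixels
# in 24-bit colour (3 bytes per pixel).
SECTIONS = 1_878
WIDTH, HEIGHT = 2048, 1216
BYTES_PER_PIXEL = 3

bytes_per_image = WIDTH * HEIGHT * BYTES_PER_PIXEL   # ~7.5 MB per section
total_bytes = SECTIONS * bytes_per_image
print(f"Per section: ~{bytes_per_image / 1e6:.1f} MB")
print(f"Whole body:  ~{total_bytes / 1e9:.1f} GB")    # ~14 GB, in line with ~15 GB
```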

Little of the research that led to the Human Genome Project and the Visible Human Project could have been done without digitisation. The outcome of both projects is a digital blueprint of a human being. Couple this with work being done on AI – the science and engineering of intelligent machines (any machine that can accomplish its specific task in the presence of uncertainty and variability in its environment) – and it is only a small leap of the imagination to arrive at a digital replica that has the exact physical and mental characteristics of a particular individual.

And now there’s AI

The term artificial intelligence (AI) was coined as early as 1955, not long after computer scientist Alan Turing (1912-54) created a test to measure computer intelligence and Arthur Samuel (1901-90) developed a program to play checkers. Traditional AI and machine learning systems recognize patterns in data to make predictions. Generative AI – pioneered in the art world by Harold Cohen (1928-2016) and his drawing program AARON – goes beyond prediction by generating new data as its primary output.

“The ideal characteristic of artificial intelligence is its ability to rationalize and take actions that have the best chance of achieving a specific goal. A subset of artificial intelligence is machine learning (ML), which refers to the concept that computer programs can automatically learn from and adapt to new data without being assisted by humans” (Frankenfield, 2023).
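The difference between recognising patterns and generating new data can be made concrete with a toy illustration. The Python sketch below is purely hypothetical – a tiny nearest-neighbour “predictor” and a word-level Markov chain “generator”, standing in for systems that are vastly more complex:

```python
import random
from collections import defaultdict

# --- Predictive ML in miniature: learn a pattern, then predict a label.
# Toy data: (hours of daylight, temperature in C) -> season label.
training = [((16, 25), "summer"), ((8, 2), "winter"),
            ((15, 22), "summer"), ((9, 0), "winter")]

def predict(sample):
    """1-nearest-neighbour: return the label of the closest training example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda item: dist(item[0], sample))[1]

print(predict((14, 20)))  # -> "summer": a prediction drawn from seen patterns

# --- Generative modelling in miniature: learn which word follows which,
# then emit a new word sequence as the primary output.
corpus = "the machine learns the pattern and the machine writes the pattern anew".split()
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

word = "the"
output = [word]
for _ in range(8):
    word = random.choice(follows.get(word, corpus))
    output.append(word)
print(" ".join(output))  # a newly generated sequence, not a stored one
```

The first half answers a question about data it has already seen; the second half produces something that was never in its training data at all.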

Such is the furore surrounding AI that it now comes with a warning. In early 2023, thousands of CEOs, technologists, researchers, academics, and others signed an open letter calling for a pause in AI deployments, even as millions of people started using ChatGPT and other generative AI systems. The letter opened by warning of AI’s “profound risks to society and humanity” and chastised AI labs for engaging in “an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

Of course, people were quick to exploit AI’s capabilities. In June 2023, two New York lawyers were sanctioned for submitting a legal brief that included six fictitious case citations generated by an AI chatbot. The lawyers acknowledged using ChatGPT to draft the document and told the federal judge that they didn’t realize the tool could make such an error.

In October 2023, actor Tom Hanks wrote on Instagram, “There’s a video out there promoting some dental plan with an AI version of me. I have nothing to do with it.” AI and its potential abuse were among the issues that led actors to go on strike in 2023 after warnings that “clones” – digital doubles – would prove disastrous for the profession.

In November 2023 in the United Kingdom, faked audio of London mayor Sadiq Khan dismissing the importance of Armistice Day and supporting the massive pro-Palestine peace march that weekend circulated among extreme right groups, prompting a police investigation.

AI has enormous potential in terms of helping to bring about greater social progress. It also holds the key to a form of immortality that challenges human notions of “Our brief finitude… in the vast darkness of space” (Holloway, 2004: 215).

Digitality anticipates immortality

By the end of the 19th century there were photographs of eminent and ordinary people. By the end of the 20th century there were digital audiotapes (DATs) of their voices and digital video discs (DVDs) of them in action. By the end of the 21st century, all that will have advanced immeasurably.

The logical outcome of convergent technologies and AI is that it will be possible to fabricate a digital replica of any person and to invest her or him with a complete biological and social life-history. Such a replica might take the form of a hologram that can dialogue about its/his/her life and even replicate certain abilities (such as dancing or playing chess). No soul – perhaps – but every other human attribute.

The idea seems fanciful until one looks at ongoing research into how human memory is stored – its architecture, data structures, and capacity. Before long, it is argued, scientists will be able to design the equivalent of the memory cards that today plug into PCs: devices connected to your brain that can record every moment of your lifetime. The idea is not new:

“Another way of thinking about technologically enhanced memory is to imagine that for your entire life you have worn a pair of eyeglasses with built-in, lightweight, high-resolution video cameras… Your native memory [will be] augmented by the ability to re-experience a recorded past… Thus, someday you may carry with you a lifetime of perfect, unfading memories” (Converging Technologies, 2002: 168).
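It is worth asking what a “lifetime of perfect, unfading memories” would amount to in storage terms. A rough sketch, using assumed figures only (compressed video at about 5 megabits per second, 16 waking hours a day, an 80-year life):

```python
# Rough storage estimate for recording every waking moment of a life as video.
# All figures are assumptions chosen for illustration.
BITRATE_MBPS = 5            # compressed HD video, megabits per second
WAKING_HOURS_PER_DAY = 16
YEARS = 80

seconds = WAKING_HOURS_PER_DAY * 3600 * 365 * YEARS
total_bytes = seconds * BITRATE_MBPS * 1_000_000 / 8
print(f"~{total_bytes / 1e12:.0f} TB, roughly {total_bytes / 1e15:.1f} petabytes")
```

Roughly a petabyte per person – large, but already within the reach of institutional storage.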

Attractive though such a scenario may be, it raises questions about the nature of human being (ontology) and human knowledge (epistemology). And, as we know from debates around surveillance capitalism and biogenetics, it raises fundamental questions about ownership and control: Who will decide whose data are worth keeping? Who will decide on their validity and authenticity? What measures need to be in place to prevent tampering with or rewriting the data?

Unconstrained by natural mortality, digital cyborgs will come to represent all that it means to be human. Our ways of speaking, our gestures, our memories, our spiritual beliefs will be encapsulated and capable of being replayed ad infinitum. Perhaps this is the real conundrum: not that AI will replace us, but that we shall replace ourselves and lose the essence of being human:

“Despite the immense power of artificial intelligence, for the foreseeable future its usage will continue to depend to some extent on human consciousness. The danger is that if we invest too much in developing AI and too little in developing human consciousness, the very sophisticated artificial intelligence of computers might only serve to empower the natural stupidity of humans” (Harari, 2018: 71-72).

There is still time to think again.

References
Arthur, Chris (1999). Irish Nocturnes. Aurora, CO: The Davies Group.
Converging Technologies for Improving Human Performance (2002). NSF/DOC-sponsored report, ed. by Mihail C. Roco and William Sims Bainbridge. National Science Foundation.
Frankenfield, Jake (2023). “Artificial Intelligence: What It Is and How It Is Used”. Investopedia.
Harari, Yuval Noah (2018). 21 Lessons for the 21st Century. Signal/Penguin Random House.
Holloway, Richard (2004). Looking in the Distance. Edinburgh: Canongate Books.

 

Philip Lee is General Secretary of the World Association for Christian Communication (WACC) and Editor of its journal Media Development. His publications include Communication for All: New World Information and Communication Order (1985); The Democratization of Communication (ed.) (1995); Requiem: Here’s another fine Mass you’ve gotten me into (2001); Many Voices, One Vision: The Right to Communicate in Practice (ed.) (2004); Public Memory, Public Media, and the Politics of Justice (ed. with Pradip N. Thomas) (2012); Global and Local Televangelism (ed. with Pradip N. Thomas) (2012); Expanding Shrinking Communication Spaces (ed. with Lorenzo Vargas) (2020); Communicating Climate Justice (ed. with Lorenzo Vargas) (2022).
