
AI and you: What we all need to know

When ChatGPT was first unveiled as a publicly accessible application in November 2022, I was skeptical of the general usefulness of generative artificial intelligence (AI). Even more, I was concerned about the ethical implications.

By WACC Deputy General Secretary Sara Speicher

Will it magnify misinformation in a volatile political climate? Will it perpetuate racist, sexist, and cultural stereotypes? Is it just another tool to collect our personal data? What will it mean for human creativity and livelihoods—and at what cost to the environment?

From meetings and exchanges since then with churches and ecumenical organizations in many countries, I know that Christians—like the rest of society—hold vastly different views on the potential and pitfalls of this rapidly developing area of digital technology.

Some bought into the AI vision immediately—literally. One church member in England told me that when ChatGPT first came out, he purchased a lifetime membership. I didn’t even know that was a thing. One of the big church-related development agencies in Scandinavia instructed their staff early on to use it to prepare presentations and reports to save time and money. (When I recently tested an AI application to prepare a slide presentation, I got pictures of cows to go with my points—and I can assure you that nowhere in my text did I mention cows.)

I’ve also had exchanges with those who feel it is a Christian responsibility to denounce not just the applications but the very effort to create intelligence that strives to be superior to humans. Some refuse to use any such applications because of their role in militarization, private profiteering, and surveillance. And others simply don’t see the need.

Since 1968, WACC has been concerned about the ethics and operations of mass communication. So many of us take communication for granted, not seeing how it shapes us as individuals and as a society. We don’t always see how beliefs and behaviors are influenced by the manipulation of communication and media. Many more now see this, with social media’s direct and sometimes rapid influence on what we perceive as “truth.”

WACC has long upheld the idea of “communication rights,” which includes freedom of expression but encompasses all that is required to make our information ecosystem just and fair for everyone: access, participation, education, accountability, transparency, inclusion, equality.


All these apply, too, when we look at the growing pervasiveness of artificial intelligence not just in communication but in technological systems beyond our ordinary control, from health to defense. As the technology becomes increasingly embedded in everyday systems, it is, frankly, impossible to avoid or be neutral about its use. As one publishers’ conference stated, “AI is here to stay and will only become more pervasive.”

Whether you are a champion or opponent of AI, the key action we must all take is to be critically aware. There are three areas to explore constantly: what artificial intelligence means and what it does, the significant ethical issues behind it, and what we can control.

What AI is and what it does

Artificial intelligence, as a term and a distinct scientific pursuit, has been around since the 1950s. AI refers to computer systems that perform complex tasks normally associated with human perception, reasoning, decision-making, and creativity. It encompasses different but related processes, such as machine learning, large language models, deep learning, artificial neural networks, and generative AI.

At the heart of it all is data: the machine learns patterns from a collection of data so that it can perform a specific task. The larger the data set and the more often the machine performs the task, the better it gets. This is how Alexa learns your preferences, and why your social media feeds fill with diet ads after you look up ways to lose weight.
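(For the technically curious, here is a deliberately tiny sketch of that learn-then-predict loop in Python, using the scikit-learn library. The users, features, and data are invented for illustration; no real platform works this simply.)

```python
# A toy illustration of the pattern-learning loop described above.
# All data here is invented: each "user" is two yes/no features, and
# the label records whether they clicked a diet ad.
from sklearn.linear_model import LogisticRegression

# Features per user: [searched for diets?, searched for recipes?]
X = [[1, 0], [1, 1], [0, 1], [0, 0], [1, 0], [0, 1]]
y = [1, 1, 0, 0, 1, 0]  # 1 = clicked a diet ad, 0 = did not

model = LogisticRegression()
model.fit(X, y)  # the machine "learns the pattern" from past behavior

# A new user who searched for diets: the model predicts a click,
# so the system would show the ad. More data means better predictions.
print(model.predict([[1, 0]]))  # -> [1]
```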

For any appliance called “smart,” think AI. Using Google Maps? Calling an Uber? Learning a new language on Duolingo? Translating text via DeepL? All these and many more pull in enormous amounts of data and use AI to personalize your request or experience.

With ChatGPT, we were introduced to generative AI, which, drawing on its large collection of data and our specific prompt, generates something “new”—new text, images, brainstorming ideas, summaries. These applications constantly improve their responses as they collect more data—including from our own interactions, whether we are conscious of it or not.
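(Again for the curious: a miniature sketch of “generating something new” in plain Python. Real generative AI models are incomparably larger and more sophisticated, but the core idea shown below, namely learning which words tend to follow which and then sampling a plausible next word, is a vastly simplified ancestor of how they produce text. The training sentence is invented for the example.)

```python
# A miniature "generative" model: learn word-following patterns from
# a training sentence, then generate new text by sampling from them.
import random
from collections import defaultdict

training_text = "communication shapes us and communication shapes society and society shapes us"
words = training_text.split()

# Learn the pattern: which words follow which in the training data
next_words = defaultdict(list)
for current, following in zip(words, words[1:]):
    next_words[current].append(following)

# Generate "new" text: start from a word and repeatedly pick a
# plausible next word based on the learned pattern
word = "communication"
output = [word]
for _ in range(7):
    options = next_words.get(word)
    if not options:
        break
    word = random.choice(options)
    output.append(word)

print(" ".join(output))  # e.g. "communication shapes society and society shapes us"
```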

The ability of AI systems to analyze huge amounts of data that scientists would take years to amass, much less review, has led to breakthroughs in healthcare—from detecting some cancers faster and more accurately to designing drugs faster and with a greater success rate. Work in this direction was recognized with the 2024 Nobel Prize in Chemistry, awarded for computational protein design and AI-driven protein structure prediction. More scientific breakthroughs are eagerly anticipated.

Behind AI

Even as we use the vast variety of AI applications to translate, summarize, research, brainstorm, draft, proof, illustrate, and more, it is important to understand AI’s limitations, and to be aware of the social, economic, and political implications behind the technology. Both require our informed and responsible use.

Some fundamental problems with AI output, especially in the generative AI applications used by the general public, persist despite rapid improvements. These include:

  • AI can make things up. It can provide plausible quotes with accurate-looking references that in fact don’t exist, and it can invent “facts,” places, and items out of thin air.
  • It can perpetuate gender, racial, and cultural biases and stereotypes. This ranges from assuming a leader is a man, to being unable to show a Black doctor, to replicating offensive texts and images.
  • Its mathematics and cause-and-effect reasoning can be flawed.
  • Ultimately, AI is only as good as the data it was trained on and the quality of the prompt it is given, so misrepresentation and partial results are a constant risk. The problem is magnified as AI-generated content is itself fed back into training data, perpetuating misinformation.

There are, of course, larger ethical considerations.

Data: Especially when the AI models were first being trained, the tech companies behind them simply hoovered up whatever was on the internet, treating it all as fair use. But huge amounts of that material are under copyright, from artists’ watermarked creations to content behind paywalls. A number of high-profile battles over copyright and artistic integrity are now under way, involving parties from the New York Times to Studio Ghibli.

How does this concern us? For one thing, you may inadvertently use material that is under copyright. For another, AI developers continually train their models on our data, whether or not we are specifically asked. How many times do you read the terms and conditions before installing a new app? In the digital world, the adage is almost universally true: if something is free, you are the product.

Environment: AI processing requires extraordinarily energy- and water-intensive data centers, whose demands go far beyond those of the already environmentally costly systems required for cloud computing. In addition, the mineral extraction needed for our digital devices is now a key driver of political and economic conflict.

Economics and politics: Our digital transformation has led to the dominance of “Big Tech”: a small number of extraordinarily powerful private companies, accountable primarily to their shareholders and able to limit scrutiny by public interest media. Among many other concerns, their business models largely determine what information we see, and thus shape our behavior and decisions.

Power: And of course, AI is used and misused for power and disruption. Governments use it for surveillance of human rights defenders, for military operations, and for ever more persuasive propaganda. Individuals and networks create deepfakes and other forms of disinformation to discredit opponents and destabilize communities. Many more issues will emerge as AI develops rapidly, and we must monitor them carefully.

What can we control?

All this may leave you in the same place you were at the beginning of this article. What do I—can I—do in this AI “revolution”? With AI integrated more and more into our systems, it may seem that we don’t have much choice. But here’s what you can do to be digitally literate:

  1. Make sure you learn about AI developments—not just what the tools do but the ethics behind them and the companies that build them.
  2. Take time to learn applications appropriately. Read the terms and conditions before you give away your data rights. Organizations should develop guidelines for staff and volunteers on what use of AI is permitted and how that use is disclosed in their work.
  3. As a general rule, don’t put anything into an AI system that you wouldn’t share with a stranger on the street.
  4. Review and fact-check what AI produces. You are ultimately responsible for the result.
  5. Do what you can for the environment. “Think before prompting” to avoid unnecessary demands on computing power and water. Keep devices as long as possible, and recycle appropriately.
  6. Be critically aware. In an age when misinformation, disinformation, and deepfakes are so prevalent, we must scrutinize everything we see, especially on social media and from media organizations that lack standards of accurate reporting. If something seems sensational or too good to be true, it probably is. Check the facts with a reputable fact-checker like Snopes or Full Fact.

Individuals, churches, and organizations need to have a basic understanding of the potential of new tools, how to use them responsibly, and the larger ethical issues around them that may require our advocacy for justice.

The bottom line is that our online world dramatically affects our physical world. In both spaces, we need to act and witness toward the world we want to see.

This article and image first appeared in the September 2025 issue of Messenger, the official publication of the Church of the Brethren. Used by permission.

Image: Paul Klee’s One Who Understands (1934), illustrated with binary code by Paul Stocksdale. Church of the Brethren.

Want to get digital media literate?

Check out “Just Digital”! WACC’s fun, self-paced e-course will help you navigate online wisely and advocate effectively. The two-module course can be followed by individuals or groups. Free for WACC members.
