Diverse voices speak for digital rights, AI accountability
Collage with headshots of 3 women and 4 men speaking on video and the colourful logo for the "Our Common Future" symposium

The race to develop ever-more advanced artificial intelligence applications is dominated by tech companies in the United States and China. Are human rights and benefits for all being ignored for gains in wealth and power for a few?

In the run-up to the symposium “Our Common Future: Advocating for Digital Rights and AI Accountability” in April 2026, WACC and co-organizers asked members of our networks to name a critical issue for digital justice.

The resulting video statements highlight both opportunities that digital technology brings and serious problems – from increasing use in war and conflict to racial bias and deepening information disorder – that call for urgent individual and collective action.

Accountability and transparency must guide AI's development

“So, as church, I believe that what we are doing is correct – to raise awareness, to educate persons, to hold each other accountable, to ensure that the world is a better place for every single one of us.”
—Merlyn Hyde Riley

Merlyn Hyde Riley, general secretary of the Jamaica Baptist Union and vice moderator of the World Council of Churches (WCC) central committee, acknowledges AI’s significant benefits while raising urgent ethical concerns about its unchecked development. 

AI must be transparent, accountable, and factual

“If all AI systems are giving priorities to non-facts or dealing with facts and non-facts, and facts and mis-disinformation as the same, then what is the content that we will be giving our kids and the generations to come in the future?”
—Rawan Damen

Rawan Damen, director general of Arab Reporters for Investigative Journalism, outlines three core demands: transparency in the disclosure of AI-generated or AI-assisted content; accountability from major technology companies over the use of data, copyright, and the spread of mis- and disinformation; and a commitment to factual information as the bedrock of any AI system. 

AI must not become an engine of systemic racism

“AI systems are inherently racist because they draw on a racially contaminated database where racialized communities are experiencing prejudice and discrimination at the hands of enforcers of systems.”
—Masiiwa Gunda

Masiiwa Gunda, WCC programme executive for programmatic responses on overcoming racism, highlights his concern that AI systems are reproducing and scaling existing historical information – much of which reflects centuries of racial prejudice, discrimination, and exclusion embedded in social structures, policies, and institutions.

AI must serve the many, not the few

“It challenges us to ask whose voices are shaping AI, whose realities are being represented and who is being left behind.”
—Rev. Jackline Makena

Rev. Jackline Makena, vice moderator of the WCC Commission on Faith and Order and lecturer at St Paul’s University (Kenya), argues that AI is not merely a technological challenge but a profound moral and justice issue.

Her central concern is that AI, if left ungoverned, will deepen existing inequalities – particularly across the Global South, where systems marked by economic disparity and political exclusion are already vulnerable to biased algorithms, surveillance, and data exploitation. 

AI governance must include the marginalized, not just the powerful

“This is not only about AI, it’s about whether digital transformation will empower people or leave them behind.”
—Kamal Sedra

Kamal Sedra, a digital transformation and cybersecurity specialist working across the Middle East and North Africa, argues that AI is rapidly reshaping power – often at the expense of vulnerable communities.

While governments across the region are investing heavily in AI for surveillance and information control, civil society, journalists, and minority groups are being left without the tools or protections they need. 

Our critical infrastructure is dangerously unprotected

“There’s really a significant gap in the legal mechanisms and in the policies and in the ability to respond to these kinds of attacks on critical infrastructure. I think increasingly the shape of war will look more and more like this.”
—Dr Erin Green

Dr Erin Green, a theologian, communicator, and digital justice researcher with a focus on AI, makes two observations about AI’s challenges today: There is a growing popular backlash against generative AI – people find it unsatisfying, addictive, and disruptive to work and mental health. And legal mechanisms are lacking when it comes to AI and protecting critical infrastructure like telecoms or energy systems.

AI is leaving the grassroots behind

“We affirm that communication is for community-building and communication is for life.”
—Vincent Rajkumar

Vincent Rajkumar, WACC Director for the Asia region, notes the stark reality that 40% of people across Asia remain unreachable by digital communication, due to poor network infrastructure, unaffordable devices, and inaccessible payment systems. While the rest of the world debates AI’s potential, vast grassroots populations are effectively “digital refugees.” 

Together for Digital Justice

Discover more from the symposium “Our Common Future: Advocating for Digital Rights and AI Accountability”