What’s AI doing to information airways in conflict and crises?

Helen McElhinney

Recently, the UK Foreign, Commonwealth and Development Office hosted a conference on the risks and opportunities of AI for humanitarian action. I kicked off my presentation on disinformation by asking the senior policymakers gathered a few questions about basic social media literacy: Can you reverse image search? Do you report concerns to platforms? The good news is that most nodded. We are well over a decade into our “social media age”, after all. The bad news is that a new dimension to this old problem has made it much more complicated, particularly for people affected by conflict and crises.

Information can be lifesaving in a crisis

People want to know which areas are safe, where their loved ones are, where to flee or seek medical assistance, or how to access aid. The ability to do so constitutes the basic health of information airways. The CDAC Network, where I’ve been Executive Director since October 2023, has long advocated for safe and trustworthy information for people in crises as a form of aid in itself, with early operational support dating back to the Haiti crisis of 2010. We call attention to the health of information airways in crises and conflict, advocate for support to media, and push for the protection of communication channels so that people can make informed decisions and have their voices heard.

Evidence shows that the degradation of the information environment can have acute impacts in conflict and crisis settings, where trust is already significantly strained. In Syria, AI-enabled bots were used to flood social media with content that spread confusion and mistrust among people caught in the conflict; in Myanmar, AI-promoted mis- and disinformation was used by state actors to fuel elements of genocide in 2018 and continues to be used to inflame intercommunal violence. In Tigray, Ethiopia, the European Institute of Peace concluded that “the people who suffered the brunt of the fighting became the ‘casualty’ of misinformation, disinformation and biased reporting”.

More recently, we watched a sophisticated disinformation operation precipitate further conflict in Ukraine in February 2022, and high-level contestation of narratives, enabled by AI capabilities, is underway in the Occupied Palestinian Territories. In most humanitarian crises, disinformation now runs alongside the emergency itself.

So what’s changed? AI-enabled disinformation

The creation of synthetic content is skyrocketing, contributing to a growing pollution of the information environment. As of August 2023, there were around 16 million hyper-realistic fake images online. Large language models (LLMs) can generate 20 tweets in just five minutes, enabling “personalised persuasion” at scale. Just recently, Google announced the integration of AI advancements in video and images, and OpenAI announced Sora, which can generate video from text, making it easier than ever for anyone to produce incredibly convincing content.

Information warfare is old news. But today, sophisticated disinformation campaigns can be launched at terrifying speed and scale for a relatively low cost. These decentralised, AI-enabled information operations are new. Disinformation-for-hire, often “computational propaganda” run by private companies, has emerged in recent years, with state and non-state actors using it to fuel tensions, influence elections and push false narratives. These efforts are often well disguised as organic content, contributing to a growing architecture of deceitful information.

Information operations, within certain limits, are not traditionally a breach of international humanitarian law (the rules of war). States must respect international humanitarian law and mitigate unnecessary harm, even when contracting a third party. This presupposes that there are sufficient means to identify disinformation and anticipate its potential harms to civilians, and sufficient methodologies to track and triangulate that causation. International law experts are grappling with this and asking whether new capabilities might tip that balance.

Understanding the effectiveness of disinformation is not easy. Identifying manipulation techniques online is a challenge because they are designed to hide in plain sight. The prevalence of closed platforms such as WhatsApp and Telegram makes such campaigns even harder to detect, before any attempt is made to assess the veracity and intent of the content. The general sense is that the most compelling fake content tends to be close to the truth, making it harder to identify and its origins sometimes impossible to trace.
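One small, concrete aid to this kind of verification – offered as a minimal sketch rather than any organisation’s actual tooling – is perceptual hashing, which can flag when a circulating image is a near-duplicate of one already verified or debunked (for instance, an old photo recirculated with a false caption). The sketch below assumes the open-source Pillow and imagehash Python packages and uses hypothetical file paths; it cannot catch wholly synthetic images, which is part of why detection remains so hard.

```python
# A minimal verification aid, not a disinformation detector: perceptual hashing
# flags when a circulating image is a near-duplicate of one already on file
# (e.g. an old photo recirculated with a false caption). It will not catch
# wholly synthetic images. Assumes the open-source Pillow and imagehash
# packages; the file paths are hypothetical.
from PIL import Image
import imagehash


def near_duplicate(candidate_path: str, reference_path: str, threshold: int = 8) -> bool:
    """Return True if the two images are perceptually similar.

    A small Hamming distance between perceptual hashes means the images look
    alike even after resizing, recompression or light cropping.
    """
    candidate = imagehash.phash(Image.open(candidate_path))
    reference = imagehash.phash(Image.open(reference_path))
    return (candidate - reference) <= threshold


if __name__ == "__main__":
    # Hypothetical files: an image spreading on social media vs. an archived original.
    if near_duplicate("circulating_post.jpg", "archive/verified_2019_flood.jpg"):
        print("Likely a recirculated image – check the original context and date.")
    else:
        print("No match in this archive; further verification needed.")
```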

AI-enabled disinformation is only a problem for those online, right?

A key function of disinformation is to influence public discourse, whether the people affected are online or not. This is done by promoting narratives that frame events in a way that benefits the malign actor. It is vital to be able to understand what narratives are out there, how they are being promoted, whether they are being seen, and – crucially – how to respond in time. Fragile and crisis-affected locations are often testing grounds for AI-driven disinformation campaigns. In places where official media were never trusted, heavy reliance on social media makes disinformation on those platforms especially pernicious.
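Monitoring narratives at even a basic level can be sketched in a few lines. The following is a rough illustration rather than a production system: it embeds a small batch of posts with an open-source sentence-transformer model and clusters them so an analyst can see which storylines recur. The sample posts are invented, and real monitoring would need multilingual models, careful sampling and human review.

```python
# A rough illustration of narrative monitoring, not a production system:
# embed a batch of posts and cluster them so analysts can see which
# storylines are recurring. Assumes the open-source sentence-transformers
# and scikit-learn packages; the sample posts are invented.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

posts = [
    "Aid trucks are being turned back at the border again",
    "Border crossings closed to aid convoys for a third day",
    "Clinics in the north are out of insulin",
    "No insulin left at the northern health centres",
    "Foreign agencies are hoarding supplies for themselves",  # a narrative worth watching
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(posts)

# A small, fixed number of clusters for this toy batch; real monitoring would
# tune this (or use a density-based method) and track clusters over time.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embeddings)

for label, post in sorted(zip(labels, posts)):
    print(label, post)
```

Grouping posts by meaning rather than exact wording matters because the same narrative rarely reappears word for word.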

Narratives can determine how minorities and crisis-affected communities are perceived; how resources are distributed by donors and governments; and who is held responsible for inevitably insufficient responses. The Overseas Development Institute, exploring this in the context of the humanitarian sector, concluded: “Narratives and frames have greater influence on policy change than facts and figures.”

So what does this mean for humanitarian action?

If the operating environment continues to distort communication channels with communities, we must adapt. Although aid remains far from demand-driven, the humanitarian sector has made commendable progress in recent years on commitments to greater accountability to affected people. We must update our understanding and capabilities so we can listen for and respond to genuine feedback despite the noise and manipulation of disinformation. We must deliberately seek out genuine criticism and frustration from communities, and actively try to understand what they need and prefer.

Humanitarians are in a tough spot. We lack the means to help people discern reliable information at scale, and at the same time trust is eroding rapidly. We can see that trends are worsening, that crises are increasingly hyper-manipulated, and that we are sometimes caught up as direct targets of disinformation campaigns. This was reinforced to me in 2022, when an undercover journalist uncovered what humanitarians couldn’t: a private company had been hired to smear and discredit a large and reputable humanitarian organisation. Such direct attacks on aid operations undermine credibility and trust in humanitarian action more broadly.

Journalism is vital for creating reliable content and promoting healthy information landscapes. Yet journalists are under unprecedented threat in conflicts. More than 100 journalists have been killed in Gaza since October 2023, while in Sudan citizen journalists face internet shutdowns as atrocities mount. Crises disrupt public service journalism: journalists flee and livelihoods disappear, while the need to verify sources in a hyper-synthesised world slows down reporting, leaving a gap often filled by less reputable sources. As colleagues at BBC Media Action reflected recently, getting it wrong once can mean losing trust built over years.

At the same time, our collective ability to identify and resist disinformation online is diminishing. Content moderation was always difficult but has worsened, as noted in a recent Internews report. Major social media platforms have laid off trust and safety teams as belt-tightening measures – the very roles responsible for tackling deliberately manipulated content. Some platforms have also restricted academic access to data, meaning robust analysis can be out of reach for tech outsiders. And sophisticated video- and image-based synthetic media can be much harder for humans to analyse and assess.

Yet the AI systems that replaced those staff often struggle to grasp the nuance and slang used to evade detection. AI tools work best in English and other major languages, because that is how the foundational AI models were built. Although people are developing workarounds, for the foreseeable future it will be easier to catch an attempt to spark a run on a bank in New York than a massacre in Africa.
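To make that gap concrete, here is a deliberately naive sketch – not any platform’s real moderation system – of keyword-based flagging: an English keyword list catches the first post but misses a slang spelling and a non-English rendering of the same call to action. All examples are invented.

```python
# A deliberately naive sketch, not any platform's real moderation system:
# an English keyword list catches the first post but misses a slang spelling
# and a non-English rendering of the same call to action, illustrating why
# automated coverage is thinner outside major languages. Examples are invented.
import re

ENGLISH_KEYWORDS = [r"\battack the convoy\b", r"\bblock the aid trucks\b"]


def flags(post: str) -> bool:
    """Return True if the post matches any English keyword pattern."""
    return any(re.search(pattern, post, re.IGNORECASE) for pattern in ENGLISH_KEYWORDS)


posts = [
    "Everyone should attack the convoy tomorrow",  # caught
    "every1 shld attk the convoy tmrw",            # slang spelling: missed
    "Attaquez le convoi demain",                   # French: missed
]

for post in posts:
    print("FLAGGED" if flags(post) else "missed ", "|", post)
```

Real systems are far more sophisticated, but the underlying pattern – strong coverage in major languages, thin coverage elsewhere – is exactly the gap described above.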

Who’s paying attention to this phenomenon at its new scale? Who needs to?

This month, the UN published its Information Integrity principles, which provide a timely diagnosis of the wider problem in the information ecosystem. Although the principles are not fully focused on crises and emergencies, CDAC Network members provided input during the consultation. The principles signal a set of actions we can all rally around while balancing the critical right to information and freedom of expression.

Several CDAC Network members – especially those in media development – have been leading efforts to tackle misinformation in emergencies for years. That work prompted the publication of CDAC’s ‘Rumour has it: a practice guide’. Members such as the ICRC led the way in commissioning work on the impact of harmful information on the safety and security of people in armed conflict, and the UN Refugee Agency is also leading a body of work examining the impacts on those seeking refuge or displaced.

Our Community of Practice on Harmful Information offers a space for members to share challenges and planned efforts in response. Most recently, the group developed an accessible tipsheet for spotting harmful information in crises. Our discussions have also introduced ideas for ways to move forward.

What might locally-led solutions look like?

  • We focus on listening to communities affected by crises. While there is growing awareness of the impact of harmful information on humanitarian operations, understanding the explicit impact on crisis-affected communities themselves is even more crucial. Information is a form of aid and a means of protection in itself. How do communities themselves assess information, ascribe confidence levels and develop trust in sources? A recent CDAC panel at the Humanitarian Xchange discussed how one of the major drivers of people sharing mis- and disinformation “is that they don’t feel seen, they don’t feel heard, they don’t feel acknowledged. The opposite of mis- and disinformation is not facts, it’s acknowledgement.”
  • We break out of silos and share analysis: Specialists conducting disinformation analysis for commercial or political purposes may be able to make their findings available for humanitarian action. Humanitarians are contracting new capabilities to spot narratives being tested in conflict and crisis settings and to track their uptake and impacts. Could we make these available to local communities and local NGOs? Such tools, the data and the skilled people needed to wield them are expensive.

 

Can we reframe the problem?

  • Better diagnoses can improve the health of information ecosystems in crises. Is disinformation worse in Ukraine, Gaza or Sudan? Is it better now than it was last year? If the presence of manipulated narratives has been identified at scale, have those findings been shared publicly, or deliberately with the communities directly affected? Could we develop classification tools that provide alerts when spikes in disinformation occur, similar to the Integrated Food Security Phase Classification (a rough sketch of that idea follows this list)? How might the new International Panel on the Information Environment (IPIE) analyse conflict-affected contexts?
  • There are strong economic and financial drivers behind disinformation. The creation and distribution of false information is often done by young people on the ground, including some in refugee camps or crisis settings, who are paid by funders based elsewhere. Data and information are now – like oil, gold and diamonds – a commodity that drives conflict. Code for Africa has reframed “disinformation-for-hire” as organised crime, and there is a growing movement advocating to demonetise and deplatform those who provide such services.
  • There’s a clear need for human moderators, especially for underrepresented languages and dialects. But the complexity of the task is increasing, and the wellbeing and labour rights of these moderators – many of whom are from the global South – are of increasing concern, as I learned directly from a brave content moderation whistleblower in Nairobi recently. Exciting initiatives such as the Data Workers Inquiry are seeking to support content moderators to organise. Perhaps it’s time to grow a movement for “fair trade” social media.
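The first bullet above asks whether spikes in disinformation could trigger alerts in the way the IPC signals food insecurity phases. As a purely illustrative sketch of where that could start – not an existing tool, and with invented counts, window and thresholds – one could compare each day’s volume of flagged items against a rolling baseline and map the deviation onto coarse phases:

```python
# An illustrative sketch of the "IPC-style" alert idea from the list above,
# not an existing tool: compare each day's count of flagged items against a
# rolling baseline and map the deviation onto coarse phases. The counts,
# window and thresholds are invented for illustration.
from statistics import mean, stdev

PHASES = ["1: minimal", "2: stressed", "3: elevated", "4: critical"]


def phase_for_day(history: list[int], today: int, window: int = 14) -> str:
    """Classify today's volume of flagged items relative to recent history."""
    baseline = history[-window:]
    if len(baseline) < 2 or stdev(baseline) == 0:
        return PHASES[0]
    z = (today - mean(baseline)) / stdev(baseline)
    if z < 1:
        return PHASES[0]
    if z < 2:
        return PHASES[1]
    if z < 3:
        return PHASES[2]
    return PHASES[3]


if __name__ == "__main__":
    # Hypothetical daily counts of posts flagged by monitors or fact-checkers.
    history = [12, 9, 14, 11, 10, 13, 12, 15, 11, 10, 12, 14, 13, 12]
    for today in (13, 15, 16, 22):
        print(today, "->", phase_for_day(history, today))
```

The hard part, of course, is not the arithmetic but agreeing what gets counted, by whom, and how alerts reach affected communities in time.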

It’s always good to keep talking and sharing in this fast-evolving space. If you want to follow our Network’s efforts on this more closely, please get in touch.

 

Helen McElhinney is Executive Director of the CDAC Network – the global alliance of organisations working to ensure people can access safe, trustworthy information and communicate during crises. Previously a humanitarian aid worker and civil servant, Helen held senior advisory roles within DFID and FCDO, and worked at posts in Sana’a and Riyadh and across HMG for more than a decade. More recently, Helen joined the International Committee of the Red Cross as Head of Policy for UK and Ireland. Helen has a Master’s degree in International Relations from the University of Cambridge, an LL.M in Human Rights Law from the University of Glasgow and an LLB from the University of Strathclyde.

 

Misinformation, disinformation and hate speech (MDH)

The AI-generated images of Pope Francis that fooled much of the internet were created in 2023 by the AI programme Midjourney.

Hate speech, according to the working definition in the United Nations Strategy and Plan of Action on Hate Speech (2019), is “any kind of communication in speech, writing or behaviour, that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are, in other words, based on their religion, ethnicity, nationality, race, colour, descent, gender or other identity factor”.

In the view of the International Committee of the Red Cross (ICRC): “Misinformation and disinformation can increase people’s exposure to risk and vulnerabilities. For example, if displaced people in need of humanitarian assistance are given intentionally misleading information about life-saving services and resources, they can be misdirected away from help and towards harm.

Misinformation and disinformation can also impact humanitarian organizations’ ability to operate in certain areas, potentially leaving the needs of people affected by armed conflict or other violence unmet.”

 
