
Deepfakes, cloned voices, and digital media literacy: AI’s role in the misinformation crisis in India

Vamsi Krishna Pothuru

How the advent of generative AI has worsened the misinformation problem in India – and the responses from stakeholders, along with potential solutions for the future.

In recent years, misinformation aided by digital technologies has been a persistent problem. It has proven detrimental to democratic institutions across the world and threatens the social fabric of communities. The problem has worsened in recent years, and the use of AI has complicated it further.

The arrival of generative artificial intelligence (Gen AI) chatbots such as OpenAI's ChatGPT, Google's Gemini, and Microsoft's Copilot has democratised the use of artificial intelligence. These chatbots are powered by large language models and require only simple text prompts to generate content, including text, images, audio, and video. Deepfake videos, one of the most popular variants of AI-generated content, involve techniques such as face swap, lip sync, and puppet master. Similarly, there are textual deepfakes, audio clones, AI-generated images, and others. The internet is flooded with tools and apps that aid in creating these sophisticated forms of altered media.

This synthetic media technology is being leveraged by actors ranging from large organisations to ordinary citizens for various purposes. Its anonymity and scalability are helping bad actors create more personalised, sophisticated, and convincing mis/disinformation. Prominent examples of deepfakes worldwide include a 2022 video of Ukrainian President Volodymyr Zelenskyy asking his troops to surrender to Russia. AI is also being used to resurrect dead personalities; recent examples include a deepfake video of Indonesia's late president Suharto and a deepfake video of Velupillai Prabhakaran, chief of the now-disbanded LTTE (Liberation Tigers of Tamil Eelam). The motivations behind AI-generated mis/disinformation range from information warfare, propaganda, election campaigns, financial fraud, and personal vendettas to many others.

Unholy union of AI and misinformation

Information disorder is the more academically rigorous term for what is popularly called “fake news”. It is categorised into misinformation, disinformation, and mal-information, based on two factors: the facticity of the information and the intention of the person creating or sharing it. The arrival of AI, however, has led to the creation of a large number of representative images and satirical memes, which are not necessarily factual but still evoke the same emotions as real images. Traditionally, mis/disinformation was created by pairing an unrelated picture or video with a false narrative to gain virality. With the arrival of AI, even ordinary people are creating representative AI images and spreading their false narratives as facts.

AI-generated fake images can be just as convincing, sometimes more so, than the unrelated images used in a conventional fake news story. Kiran Garimella and Simon Chauchard argue in Nature that even though AI-generated images resemble animation, they still resonate with the emotions of the audience and persuade them to believe the message. Similarly, Hany Farid, a professor at the University of California, Berkeley, said in an interview with NPR that these images are designed to push narratives and propaganda rather than being purely deceptive.

Most deepfake cases involve famous personalities, as their photographs, speeches, and videos are available in the public domain. It is easy to build a false narrative around such altered multimedia content, which can persuade audiences by resonating with their biases and preconceived notions. While a traditional fake news item can be debunked simply by pointing to the original, unaltered source, a representative image or an audio clone is harder to fact-check beyond stating that it is AI-generated. This is especially true of AI-generated audio clones, which are difficult to declare fake with 100% certainty. Several studies indicate that audio clones are among the most common deepfakes encountered across the world.

AI, misinformation and the ordinary citizen in India

In 2024, around 50 countries, including India, held general elections. This was expected to lead to a large-scale rise in election-related deepfakes and AI-generated mis/disinformation. However, several studies indicated that concerns about AI-induced misinformation during the election campaigns were overblown. While that is true to some extent, it remains important to track the phenomenon and its future implications. The impact of this technology also varies across societies. In India, for example, AI-generated misinformation should be approached contextually, considering factors such as ordinary people's capacity to use the internet, their vulnerability to online harms, their awareness of deepfakes, and the array of actors behind misinformation campaigns.

According to the 2024 report by KANTAR and IAMAI (Internet and Mobile Association of India), more than half of India's internet users are from the rural parts of the country. The Indian rural population has increasingly come online with little to no digital literacy, and its digital vulnerabilities have worsened in recent years through online harms such as mis/disinformation and cybercrime. “Digital Deception Index: 2024 report on deepfake fraud's toll on India” by Pi-labs revealed that deepfake-related cybercrimes in India have increased by 550% since 2019. “Digital arrest” scams, deepfake e-KYC, and fake trading and investment apps endorsed through celebrity deepfakes are a few of the cybercrimes that have been shaking the country for the last couple of years.

The alarming levels of cybercrime in India reveal deep-rooted vulnerabilities among its citizens. These coincide with a years-long mis/disinformation crisis, which has resulted in mob lynchings triggered by cow-smuggling and child-kidnapping rumours, especially in rural parts of the country, and in extreme polarisation along the lines of religious identity, nationalism, and other ideologies. Probing the reasons behind such vulnerabilities points, among other factors, to a lack of digital literacy and a lack of knowledge about avenues of authentic information.

According to Vice (2020), one of the first instances of AI usage in an election campaign occurred during the 2020 Delhi elections, when a deepfake video of BJP leader Manoj Tiwari speaking in Hindi and Haryanvi spread across thousands of WhatsApp groups. This may seem like a harmless voice clone, but the use of AI in election campaigns has taken many forms in recent years. In the run-up to the 2024 general elections, a series of deepfake videos of the late M. Karunanidhi of the DMK party in Tamil Nadu, a southern state of India, were screened at public events. There were also deepfake videos of Indian actors such as Ranveer Singh and Aamir Khan asking people to vote for the Indian National Congress (INC) party, and several deepfake videos of various celebrities endorsing political candidates, promoting fake medicines, and advertising financial scams.

In India, several such deepfake cases have triggered considerable discourse around the implications of AI, and in some cases even led to punitive action. Some scholars argue that the fear of legal action has resulted in fewer deepfakes in recent Indian elections. In other cases, however, the state may overstep, curbing freedom of speech and democratic dissent. In one recent instance, Smita Sabharwal, a senior IAS officer in the southern state of Telangana, was summoned and later transferred for sharing an AI-generated Ghibli-style image about the state government's controversial land auction at the University of Hyderabad.

India currently has no dedicated fake news law, but it often invokes the Bharatiya Nyaya Sanhita (BNS), 2023, which replaced the earlier Indian Penal Code, to punish citizens for creating and sharing mis/disinformation. Similarly, the Indian government uses the Information Technology Act (IT Act), 2000 and its subsequent intermediary guidelines, the IT Rules, 2021, to address mis/disinformation and deepfakes. Along with punishing individuals for spreading disinformation, these rules cast specific obligations on social media platforms, which are referred to as intermediaries.

Gendered and ideological disinformation

“Gendered disinformation” primarily targets women and gender minorities through doctored photos, obscene videos, and false narratives intended to defame their character and undermine their credibility. At the community level, where patriarchal notions prevail, such disinformation has severe implications for women, including physical threats and restrictions on their freedom and movement; it also hinders women's access to the internet and to education. Cheap fakes were once the main mode of such online harassment, but AI now aids this genre of disinformation with greater personalisation, sophistication, and anonymity. One example of gendered disinformation in India is the plethora of AI-generated soft-porn images of Muslim women with Hindu men, revealed in the investigative report “Zalim Hindu Porn” by Aditya Menon of The Quint, an India-based news platform. These scores of AI-generated images have become an ideological tool for digital hate, spread across social media in a coordinated manner.

Similarly, an exclusive report by “Decode” of BOOM, a digital journalism and fact-checking platform in India, revealed how text-to-image tools are being weaponised to generate hateful imagery around certain communities in India, including AI-generated images depicting Muslim men as paedophiles, stone pelters, and other stereotypes. In a socially diverse country such as India, there is a danger that such hateful trends will spread across gender, caste, ethnicity, and other identities.

Responses and promises from stakeholders

Fact-checking initiatives have been the first responders to misinformation in India. Social media platforms have been collaborating with these initiatives to debunk claims on their platforms, promote digital media literacy among their users, and build the capacity of fact-checkers. Similarly, civil society organisations are working at the grassroots, looking for resources to address AI variants of mis/disinformation in their communities. More concrete collaboration among these three is essential in the fight against AI-generated mis/disinformation. The following sections discuss the responses from these stakeholders and potential solutions.

Fact-checking initiatives as front-line responders

Fact-checking units have been actively responding to AI-generated misinformation such as deepfakes and audio clones. One of their biggest challenges over the years has been making fact-checking articles accessible to audiences: given the technical nature of these articles, they were difficult for ordinary readers to access or understand. Fact-checkers have therefore started converting long articles into short-form multimedia content such as reels, flashcards, Instagram carousels, and other formats.

One of the reasons behind this is to familiarise people with fact-checking tools and processes, which serves the purpose of digital media literacy in the long run. In a positive trend, fact-checking initiatives in India have recognised the importance of digital media literacy among their audiences and, along with fact-checks, have started creating explanatory videos, fact-checking tutorials, and other educational content.

In response to AI-induced mis/disinformation, a few initiatives solely address deepfakes in India. The Deepfake Analysis Unit (DAU) and Logically Facts are actively addressing AI-generated misinformation in India, especially deepfakes. The DAU runs a dedicated WhatsApp tipline through which citizens can report deepfakes. It is part of the Misinformation Combat Alliance (MCA), which is similar in structure and purpose to the International Fact-Checking Network (IFCN); most fact-checking initiatives in India are signatories of the IFCN and of the newly established, India-based MCA. One promising trend in India's fact-checking ecology is the increased collaboration between fact-checkers and news publishers. The SHAKTI coalition, for example, emerged to combat election-related mis/disinformation during the 2024 general elections.

Such collaborations are critical in a multi-language country like India, as they facilitate the transfer of capacities and knowledge among fact-checkers from different states. Collaboration also makes it possible to track AI-generated misinformation in India in real time, knowledge that helps in creating and deploying the tools needed to combat it effectively, in creating digital media literacy resources for audiences, and in building the capacity of fact-checkers and media professionals. The rapid development of AI and its use in the information landscape require fact-checkers and media professionals to stay up to date with AI verification techniques. These collaborations also give fact-checkers a sense of community and a space for learning from one another as equals.

Social media platforms and information integrity in online spaces

The recent announcement by Meta that it will discontinue its third-party fact-checking programme in the USA has set off alarm bells in other countries, including India. Meta has nearly 100 fact-checking partners globally; in India, it has 12 partners covering 16 languages. The X platform, meanwhile, has been watering down its “informational harm” policies and relying almost completely on community notes to address mis/disinformation. These developments pose a severe long-term threat to information integrity on digital platforms.

In recent years, fact-checking initiatives across the world have also been facing a credibility crisis, in which they are falsely portrayed as serving particular ideologies. Rappler, based in the Philippines, and Alt News in India are two initiatives among many that have faced threats from both the state and ideological groups. Amid these conditions, social media platforms, as responsible big tech, must work with the fact-checking community in each country and aid its efforts to fight mis/disinformation.

To address the rise of deepfakes on their platforms, social media companies must ensure that watermarks or other authenticity indicators accompany content generated by AI tools. Such measures can help users distinguish deepfakes from factual content; it is vital that ordinary internet users can tell original content from AI-altered content in the media they consume every day. With their tremendous reach, these platforms should provide AI indicators on deepfakes and incorporate digital media literacy content into their feeds. Such seemingly simple algorithmic measures are a positive step towards information integrity in online spaces.
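To give a concrete sense of what such an authenticity check can involve, the sketch below scans an image file for common AI-provenance signals: a generator name in the metadata or the marker of an embedded C2PA provenance manifest (C2PA is the open standard behind many "content credentials" schemes). It is a minimal illustrative heuristic in Python, not a production detector; the generator list and the sample filename are assumptions, and real provenance verification requires full cryptographic validation of the manifest.

```python
"""Crude check for AI-provenance signals in an image file.

A minimal sketch only: robust provenance verification means validating
a full C2PA manifest, not grepping for markers as done here.
"""
from PIL import Image  # pip install Pillow

# Generator names sometimes left in metadata by AI tools (illustrative list).
KNOWN_GENERATORS = ("midjourney", "dall-e", "stable diffusion", "firefly")

def looks_ai_generated(path: str) -> bool:
    with Image.open(path) as img:
        # 1. The EXIF "Software" tag (ID 305) sometimes names the generator.
        software = str(img.getexif().get(305, "")).lower()
        if any(name in software for name in KNOWN_GENERATORS):
            return True
        # 2. PNG text chunks (img.info) may carry generation parameters.
        blob = " ".join(str(v) for v in img.info.values()).lower()
        if any(name in blob for name in KNOWN_GENERATORS):
            return True
    # 3. A C2PA manifest is embedded in a JUMBF box whose "c2pa" label
    #    appears in the raw bytes. Its presence only means provenance
    #    data exists; the manifest itself still needs proper validation.
    with open(path, "rb") as f:
        return b"c2pa" in f.read()

if __name__ == "__main__":
    print(looks_ai_generated("example.jpg"))  # hypothetical file
```

Note that the absence of these signals proves nothing: metadata is easily stripped, which is why platform-side labelling and media literacy remain necessary complements.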

Technology companies can collaborate with fact-checking initiatives to develop AI tools that aid fact-checkers. Social media platforms should also make their platform data more accessible to researchers, fact-checkers, and technology companies so that free deepfake-detection tools can be developed for the masses.

Civil society organisations and digital media literacy approach

Just imagine the impact a deepfake video of a local politician inciting religious hatred could have in a small community such as a village. How many of these first-time technology users know about deepfakes or AI? Do they have the capacity to verify claims or to access and read fact-checking websites? Do they have avenues of authentic information? Are existing digital media literacy or AI awareness programmes contextualised for their literacy levels, local culture, and knowledge? These questions need to be addressed in order to combat AI-generated mis/disinformation at the community level. Proposed technological solutions, such as mere watermarking of AI-generated content or scientific fact-checking articles, may not be a complete solution for these communities.

Sophisticated misinformation variants such as deepfakes and audio clones may have especially severe implications for rural communities. AI awareness is the first and most important step in making these communities safe in the current information ecosystem: basic knowledge can induce a healthy scepticism, the starting point in this long battle against AI-generated mis/disinformation. Stakeholders must develop a comprehensive and contextual digital media literacy approach to make communities resilient to these ever-evolving online harms. A few civil society organisations are already fighting mis/disinformation at the community level.

Digital Empowerment Foundation (DEF) is a Delhi-based non-profit organisation that creates opportunities for communities through digital tools and digital literacy. It has also been addressing the misinformation problem at the grassroots through digital media literacy toolkits developed contextually for the communities it works with. Through its comprehensive “Media Literacy Awareness and Action Plan”, DEF uses culturally relevant, peer-to-peer learning and hands-on training toolkits to make communities resilient against misinformation. Its curricula also cover how to recognise misinformation, fact-checking, and the concept of AI, making these communities aware of the implications of AI and misinformation.

Similarly, Ideosync Media Combine is currently running the “Bytewise Factcheck Fellowship” in partnership with Youth Ki Awaaz, a citizen media platform in India. This digital media literacy programme equips school students aged 13–17 with knowledge about the role of AI in mis/disinformation, digital fact-checking tools, and other skills. Comprehensive media literacy training at a young age is a long but promising approach to preserving information integrity in online spaces. Indian schools also need dynamic curricula that equip students to recognise mis/disinformation and deepfakes. One successful programme is “Satyameva Jayate” (Truth Triumphs), launched in 2021 by the government of Kerala, a southern state of India, which aims to instil responsible digital engagement in students, along with knowledge about mis/disinformation and fact-checking skills.

These case studies reveal the importance of digital media literacy in the fight against mis/disinformation. DEF uses a socially and culturally appropriate approach to impart digital media literacy to village communities. Beyond contextualising content, it also uses strategies such as gamification and peer-to-peer learning, which are instrumental in engaging communities in learning activities. Other stakeholders should draw on DEF's approach to deliver effective and personalised digital media literacy for citizens.

AI as a potential solution

AI detection tools have been emerging in recent years and can be a potential solution for verifying deepfakes. A few publicly available free tools for verifying AI content are playing a crucial role in debunking the scores of AI-generated mis/disinformation produced daily. They include Hiya (a deepfake voice detection tool), The Factual (a source quality checker), Deepfake-o-Meter, Hive Moderation, Logically, Originality.ai, AI or Not, and others. These tools help both fact-checking initiatives and individuals verify AI-generated information they come across online.
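Most of these detectors follow a similar workflow: submit a media file, receive a probability score. The sketch below shows what such a call might look like in Python; the endpoint URL, API key header, and response field are hypothetical placeholders, not the actual interface of any tool named above, so the documentation of each service should be consulted for the real API.

```python
"""Query a (hypothetical) deepfake-detection REST API for a media file."""
import requests  # pip install requests

API_URL = "https://api.example-detector.com/v1/analyze"  # placeholder URL
API_KEY = "YOUR_API_KEY"  # placeholder credential

def detect_deepfake(media_path: str) -> float:
    """Upload a file and return the detector's 'AI-generated' probability."""
    with open(media_path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"media": f},
            timeout=60,
        )
    response.raise_for_status()
    # Assumed response shape: {"ai_probability": 0.93, ...}
    return response.json()["ai_probability"]

if __name__ == "__main__":
    score = detect_deepfake("suspect_clip.mp4")  # hypothetical file
    # Scores are probabilistic: treat them as one signal among many,
    # not a verdict. Detectors disagree and can be wrong.
    print(f"Probability of AI generation: {score:.0%}")
```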

In an interesting case, Factly, a data technology company and fact-checking initiative based in Hyderabad, has developed its own AI fact-checking tools, ‘Sach’ and ‘Tagore AI’, to assist its fact-checking work; Sach was built with the support of the Google News Initiative. Fact-checking initiatives have the advantage of observing the creation and spread of deepfakes in real time, and their knowledge of deepfake variants and fact-checking techniques is a valuable asset in developing AI tools that combat deepfakes and mis/disinformation effectively. Similarly, the social and cultural knowledge of grassroots civil society organisations can help in further contextualising such tools.
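One common building block of such AI-assisted fact-checking tools is claim matching: comparing an incoming claim against a database of already-published fact-checks so that recycled misinformation is caught instantly. The sketch below is a generic illustration using the open-source sentence-transformers library; it does not describe how Sach or Tagore AI actually work, and the sample claims and threshold are invented.

```python
"""Match an incoming claim against previously fact-checked claims.

A generic sketch of semantic claim matching, not a description of any
specific tool. The data below is invented for illustration.
"""
from sentence_transformers import SentenceTransformer, util  # pip install sentence-transformers

# Tiny stand-in for a database of already fact-checked claims.
FACT_CHECKED = [
    "Video shows actor endorsing a political party (AI-generated).",
    "Audio clip of a politician admitting fraud is a voice clone.",
    "Image of flooded city street is AI-generated, not a real photo.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # compact general-purpose model
db_embeddings = model.encode(FACT_CHECKED, convert_to_tensor=True)

def match_claim(claim: str, threshold: float = 0.6):
    """Return the closest prior fact-check if similarity exceeds threshold."""
    query = model.encode(claim, convert_to_tensor=True)
    scores = util.cos_sim(query, db_embeddings)[0]
    best = int(scores.argmax())
    if float(scores[best]) >= threshold:
        return FACT_CHECKED[best], float(scores[best])
    return None, float(scores[best])

if __name__ == "__main__":
    hit, score = match_claim("Clip of a famous actor asking people to vote for a party")
    print(hit, f"(similarity {score:.2f})")
```

In a multi-language country like India, a production system would need multilingual embedding models and human review of every match, since semantic similarity alone cannot judge facticity.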

Conclusion

Even though various studies indicate that concerns around AI-generated misinformation are overblown, it is necessary to foresee the future implications of this technology and track its evolution in the present. With the rapid evolution of generative AI, strategies to combat deepfakes must evolve too, and these strategies and solutions must take rural populations into account, given their heightened vulnerability to online harms. The advent of AI has resulted not only in the mass production of deepfakes but also in evolving strategies for creating ever more sophisticated mis/disinformation.

In this context, technology companies have a responsibility to share data about emerging AI technologies with researchers and academics. Stakeholder collaborations could also produce effective strategies for building free, easy-to-use AI detection tools for citizens by harnessing AI itself. Finally, a community-centric approach to digital literacy programmes, one that is socially and culturally contextual, promises resilience against the ongoing information crisis.

References

Bond, S. (2024, December 21). How AI deepfakes polluted elections in 2024. NPR. https://www.npr.org/2024/12/21/nx-s1-5220301/deepfakes-memes-artificial-intelligence-elections

Christopher, N. (2020, February 18). We’ve just seen the first use of deepfakes in an Indian election campaign. VICE. https://www.vice.com/en/article/the-first-use-of-deepfakes-in-indian-election-by-bjp/

Digital Empowerment Foundation. (n.d.). Media information literacy initiatives. https://www.defindia.org/media-information-literacy-initiatives/

Garimella, K., & Chauchard, S. (2024, June 5). How prevalent is AI misinformation? What our studies in India show so far. Nature. https://www.nature.com/articles/d41586-024-01588-2

Kantar & Internet and Mobile Association of India. (2024). Internet in India 2024. https://www.iamai.in/sites/default/files/research/Kantar_%20IAMAI%20report_2024_.pdf

Menon, A. (2025, March 7). ‘Zalim Hindu’ porn: How AI is mass producing pornographic images of Muslim women. The Quint. https://www.thequint.com/news/politics/artificial-intelligence-muslim-women-hindutva-soft-porn-images-facebook-instagram

Press Information Bureau. (2025, April 4). Government of India taking measures to tackle deepfakes. https://www.pib.gov.in/PressReleasePage.aspx?PRID=2119050

Rebelo, K. (2024, October 14). Exclusive: Meta AI’s text-to-image feature weaponised in India to generate harmful imagery. BOOM. https://www.boomlive.in/decode/exclusive-meta-ais-text-to-image-feature-weaponised-in-india-to-generate-harmful-imagery-26712


Vamsi Krishna Pothuru is a PhD student in the Department of Communication at the University of Hyderabad, India. Under the supervision of Prof. Kanchan K. Malik, his research examines information disorder in Indian villages and the responses of various stakeholders, including civil society organizations. One key area of his research focuses on a community-centric approach to digital media literacy interventions aimed at addressing misinformation. Before beginning his PhD, he worked as a fact-checker at NewsMeter, an IFCN-certified media house in Hyderabad, India.
