
Can machines think? What feminism can teach us about ethical AI development beyond de-biasing

Laine McCrory

A significant catalyst for the development of artificial intelligence (AI) research came when computer scientist Alan Turing asked the question: can machines think? In his research, Turing evaluated whether users could tell the difference between two terminals – one controlled by a human, the other controlled by a machine. A machine passed this test (later dubbed the “Turing Test”) as long as it could convince the human user that they were interacting with a human rather than a machine.

Following this question, the next decades of computer science research explored how human intelligence might be replicated in algorithms. In 1956, the term “artificial intelligence” was coined at the Dartmouth Summer Research Project on Artificial Intelligence, a gathering of leading computer scientists studying machine learning. The decades since the Dartmouth Summer Research Project have seen numerous developments in computer science and engineering, highlighting that the seemingly rapid rise of AI tools is the result of decades of research, theorization, and development that still aims to answer the question: can machines think?

Currently, a commonly accepted account of AI is found in the OECD’s formal definition, which was adopted in March 2024 and forms the basis of many policy and research objectives:

An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment (OECD, p.4).

This definition recognizes the variety of ways in which AI can be developed and alludes to the outputs that these systems create, which may lead to harmful impacts. Most commonly, the harms of AI are presented as individual threats to autonomy and privacy that may lead to reputational damage, physical injury, economic loss, or discrimination. We have seen these harms proliferate with the rise of deepfakes, chatbots, and misinformation. As a way to address these harms, initiatives from both industry and policymakers have worked to build ethical AI systems.

Building ethical AI

Within industry, ethical AI development focuses on designing systems according to the principles of Fairness, Accountability, Transparency and Ethics (‘FATE’), with industry actors taking the lead on designing standards and practices that align with these principles. Industry approaches to ethical AI development often take the form of cooperative agreements between corporations – such as the Montreal Declaration on Responsible AI – or of values adopted within individual companies. However, these guidelines are often described as voluntary commitments, as they encourage responsible development without imposing requirements or penalties for failing to build ethical systems. Proponents argue that because engineers are the most informed about the technical architecture of these systems, they are best placed to address the individual harms that may arise.

In addition, advocates for industry-led approaches to ethical AI development highlight how the rapid pace of technological change often outpaces policy. Yet these industry-led initiatives have faced criticism for relying on voluntary principles, such as FATE, and for framing harms as issues that can be solved through technical fixes. Without a framework that requires compliance, critics argue, it is difficult to ensure that these AI systems will be built ethically.

On the other hand, there has also been an increase in policies that promote ethical AI development at both national and international levels. The EU AI Act, which was passed in March 2024, has quickly gained prominence as a model for risk-based AI policies. Unlike voluntary agreements such as the Montreal Declaration, the Act is binding, and it aims to both promote innovation and minimize the risks of AI development. It assigns AI systems to four categories of risk: unacceptable-, high-, limited- and minimal-risk applications. By distinguishing between these categories of risk, the Act imposes requirements for trust, transparency, and accountability on the systems.

This differs from the voluntary approaches, as AI developers are required to comply with the Act rather than being able to choose whether to adhere to its goals. Those who support the AI Act argue that this requirement could increase public trust in AI. However, some critics are sceptical of the Act’s capacity to build public trust, as it does not address the immaterial harms caused by AI or the need for user agency.

Beyond De-Biasing Systems

While they differ in who takes the lead, both of these approaches focus on promoting innovation while preventing individual risks. They reiterate the practice of de-biasing – mitigating biases in algorithms so that they align with certain principles – as the way to prevent harm. Yet in focusing on risks at an individual and technical level, these approaches do not address the systemic harms ingrained in the technological and policy development processes. Rather than adhering to the definition of AI proposed by the OECD, critical AI scholar Kate Crawford argues that we must understand artificial intelligence as “both embodied and material, made from natural resources, fuel, human labor, infrastructure, logistics, histories, and classifications” (Crawford, p. 8). She sees AI as impacting more than individuals, as these systems are designed as structures of power and control.

Crawford also argues that AI research is built on a myth that intelligence can be quantified, captured, and measured independently of social and political realities. The harms of AI systems extend far beyond the individual, as these systems have been used to facilitate digital redlining of marginalized communities, misclassify people who do not conform to the gender binary, reinforce ableist stereotypes of disability, and deepen racial inequalities. Because these systems are built on data that carries unacknowledged systemic biases, AI technologies often reproduce harmful historical practices of marginalization and discrimination.

When such deep-rooted biases are addressed only through individual fixes in policy and technical design, the pattern of collective harm goes unrecognized. It is therefore important to consider the limits of AI ethics discourses and to talk instead about power.

What can feminism teach us about ethical AI development?

In order to address the systemic harms caused by AI, there is a need to understand AI itself as a system of power, as Kate Crawford describes. Systemic biases cannot be “programmed out”, as they are interconnected with every aspect of daily life, even where they have become invisible or normalized. Feminist research is particularly important here, as feminists argue that the personal is political: politics shapes our lives in many different ways. Kimberlé Crenshaw, a legal scholar who focussed on the ways that Black women were marginalized within the legal system because they exist at the intersection of race and gender, coined the term “intersectionality” in 1989 to describe how overlapping systems of power mean that different groups experience harm along multiple axes. Feminist research operates with the goal of exposing and challenging structures of power, a practice that has been embraced by feminist tech researchers such as Catherine D’Ignazio and Lauren Klein.

In their book Data Feminism, D’Ignazio and Klein highlight how tech development emerges from histories of counting and classification that sought to control marginalized groups. They argue that feminist research on the histories and futures of AI must challenge these practices and build collective agency. An intersectional lens recognizes that because harm arises along multiple axes, addressing one source of oppression – for instance, gender inequality – will not bring justice for everyone, since many people are oppressed along several lines at once. As such, approaches to addressing the specific risks of AI must attend to these intersectional harms by respecting and cultivating a diverse range of agency and expertise outside of purely technical spheres.

One way to work towards this range of expertise is to use policy to promote digital citizenship as a way for users to actively engage with and shape the future of AI development. For Engin Isin and Evelyn Ruppert, the digital citizen is a distinct role that emerges when an individual acts as part of a broader collective to both learn about digital systems and promote digital rights. Digital citizenship emphasizes that people should be actively involved in the creation of rights and responsibilities, rather than having them universally imposed from above. For digital citizens to participate actively, therefore, policy must develop meaningful methods of engagement.

Digital citizenship involves two processes: enactment and inscription. Inscription refers to how users claim rights through legal processes, while enactment refers to the role that users have in defining what those rights are. While inscription is particularly important, focussing only on the ways that digital citizens are affected by a system does not give them agency. Rather, there is a need to embrace digital citizenship as a participatory project, in which policy supports enactment processes.

These enactment processes treat individuals and collectives as essential to policymaking: if policy is to address the systemic harms that marginalized groups experience within AI systems, those groups must be positioned as central actors through meaningful engagement. Meaningfully engaging groups involves critically examining relationships of power and access, highlighting that within the AI ecosystem certain technical and policy voices are privileged over others.

In its current structure, AI policymaking often happens within high-level groups consisting of networks of experts with very little representation from those outside technical, industry and academic spheres, or within consultations and forums riddled with power imbalances. A feminist approach to building digital citizenship in the age of AI builds from participatory methods that promote equitable involvement and collective ownership, such as data trusts and stewardship, as well as mini-publics, citizens’ juries, and community oversight. These processes put digital citizens at the heart of decision making, while addressing barriers to participation through an equity-focussed lens that accounts for histories of marginalization.

Feminism can teach us a lot about who goes noticed and unnoticed in both AI development and policy. In her book Living a Feminist Life, Sara Ahmed demonstrates how some of the most important work towards social change is generated by using feminism to call attention to problems that go unnoticed and to connect them to systems of power. Much of this work is done not by academics or engineers, but by those with the lived experience and knowledge to know what a community needs to flourish. In proposing a feminist account of the limits of ethical AI development, as well as a method by which these limits can be challenged, I reiterate the importance of critically analysing how our policy frameworks address the harms posed by AI. Embracing feminist research and participatory digital citizenship means not only examining power but also developing strategies to shift it.

 

Laine McCrory is a Master’s student in the Joint Program in Communication and Culture at Toronto Metropolitan University and York University, and an incoming doctoral student at New York University in Media, Culture and Communication. Her research focuses on the intersection of feminism, policy and artificial intelligence. In addition to her academic work, Laine has been a Digital Policy Hub fellow with the Centre for International Governance Innovation, and is the founder of the Techno-Feminist AI Syllabus.
