
Reframing AI governance through a political economy lens

IT for Change 2023

While AI is not new to the field of computational science, the release of ChatGPT by OpenAI in November 2022 marked a watershed moment. In a short time, the tech sector has rolled out Large Language Models (LLMs) and other Generative AI (GenAI) initiatives in rapid succession.1 Recognizing both the concerns and opportunities that AI poses, regulators and policymakers, too, have been addressing the conundrums confronting a new AI-mediated future.2

Appropriate policies and laws to mitigate risks and harms related to AI deployment are vital. However, this is not enough. A just and equitable AI paradigm hinges on the radical restructuring of the global regime of knowledge, innovation, and development. This requires a structural justice approach to AI governance that is able to articulate the pathways for multi-scalar institutional transformation.3

I. Key issues for AI governance

1. Big Tech’s all-encompassing hold over AI. The ecosystem fostering AI remains dominated by Big Tech players.4 These corporations invest in research and development on an unmatched scale, brokering lucrative partnerships with startups and governments.5 Given their incomparable market power, these actors exercise entrenched infrastructural and narrative power over the sector. This translates into the ability to gatekeep access, an outsized agenda-setting voice at the table, and the wherewithal to actively shape policy discourse, influence rulemaking, and circumvent enforcement.

2. Structural barriers to a Southern-led AI. In the AI economy, the dominant tech companies and infrastructure are either American or Chinese. This bipolar geo-economic context creates new dependencies – extending to foreign investment and debt, digital infrastructure, talent pools, and intellectual property6 – for countries of the Global South, which struggle to assert their sovereignty meaningfully in the AI space.7 In addition, the trade rules on data and e-commerce currently being framed at the WTO and in regional/plurilateral agreements continue to bind developing countries to de facto rules that stifle their digital policy space and prohibit the meaningful evolution of their data economies.8

3. Subversion of justice in the ‘Responsible AI’ discourse. The ‘Responsible AI’ framework has garnered immense institutional power, becoming the aspirational norm for policy making. It is championed by the OECD, international development agencies, and the private sector. But this discourse sidesteps the power and resource imbalances characterizing the current AI paradigm. Often co-opted by the powerful to diffuse accountability, evade liability, and disregard rights, responsible AI ends up reduced to benign ‘product safety’ considerations.9 The legal regime’s focus on risks and individual redress tends to marginalize the here-and-now concerns of actual harm to people, society, and our habitat and natural ecosystems. Institutional readiness to enforce audits or demand transparency from corporations about algorithmic processes lags behind.

4. Systematic evasion of transparency. Despite being constantly invoked in ethics guidelines, transparency remains an elusive piece of current AI practice. AI companies fail to proactively disclose how their systems work,10 trivializing transparency into hasty, rudimentary audits or post facto redressal of harms. These approaches, while necessary, are insufficient: they can only trace a single hostile or offending element of a system as an error or malfunction, but cannot unravel the decision-making process and the factors that inform the cycle of input, output, outcome, and impact, which is where responsibility and accountability can be located.

5. Perceived ‘ungovernability’ of AI. By framing AI regulation as a ‘blank slate’ where the old normal does not apply, corporations and governments distort its necessary basis in public deliberation. Further, technical obfuscation and assertions of an inherent ‘unknowability’ of AI shroud the policy discourse.11 While AI-specific regulation does deserve attention, AI exceptionalism only furthers the myth of ‘ungovernability’. It decouples the object of AI regulation from the basic maxims of harm prevention, accountability, and transparency historically applied across socio-economic policy sectors. Additionally, blank slate approaches disregard precedents from the domains of competition law, consumer welfare, data governance, corporate governance, etc., that already exist and could extend effectively to this space.

6. Absence of civic-public interest and inclusivity in the AI agenda. Marginalized communities and groups – including women, racial and sexual minorities, small producers, workers, and indigenous communities – are largely excluded from decision making around AI, whether in the determination of priorities, design and deployment, or policy and rule-making.12 This happens in many ways. First, the AI discourse is wrongly framed as an elite, technical issue, negating its civic-public basis. Second, the absence of shared language and platforms to engage with AI as a larger societal process makes it impossible for the vast majority to meaningfully understand and interpret the impact and consequences of AI. Third, the epistemology driving AI is largely rooted in Eurocentric thought and traditions of liberalism, which, while having desirable aspects, often eliminate the alternative knowledge frameworks of Southern and indigenous peoples.13

Finally, there is also a high risk of the inherent biases and glaring omissions in data sets becoming reified into ‘objective’ truths, denying the Majority World meaningful representation in the AI paradigm. Paradoxically, the arc of AI innovation continues to exploit people from these countries to feed an extractive data economy – for the iterative improvement of corporatized AI systems that lock in innovation, and for the monetization of user attention.

7. Complexity in AI geopolitics. As a dual-use technology, AI is at the center of both strategic and development objectives for nations. This creates multiple pushes and pulls for multilateral governance and norm setting. For one, with AI finance, resources, and talent concentrated in the US and China, geopolitics and geo-economics remain a crucial force in determining the course of future AI development for the rest of the world.14 This threatens to fragment the digital policy space, deter global consensus on norm building, and leave smaller nations out in the cold. Competing visions of development also characterize national visions of AI and how the balance between individual rights and social good is calibrated. Further, the escalating militarization of AI and its significance for national security are also likely to influence how governments, especially in the Global North, assess its value and consequently frame its regulation.15

8. Worrisome inattention to risks from AI models. Spurred by efficiency arguments and the time-to-market considerations of current corporate, VC-backed efforts, advanced AI models are being adopted at record speed, undermining risk assessment. AI innovation operates in a regulatory Wild West, ignoring knowledge gaps on risks stemming from unreliability, misuse, and systemic issues.16 A culture of impunity marks the field, disregarding potential degradation in service standards, larger margins of error, and the probability of active harm with cascading effects. For instance, frontline workers are increasingly advised to trust and follow AI systems and to mistrust their own experience, judgment, and discretion. The locus of accountability for error or failure is firmly pinned on workers with little or no power in the system, while the technical prowess of the models is defended and given a wide berth.17 The scope for large-scale breakdowns such as the Robodebt debacle in Australia originates in the neglect of due diligence processes in AI innovation.18

Additionally, the loss of information and data integrity is now exacerbated by AI tools, which are not always capable of detecting falsehoods and can thus end up replicating them.19 The prevalence of AI-fueled disinformation threatens the safety of vulnerable groups and erodes trust in the digital public sphere. Emerging policy approaches towards ‘derisking’ AI look only at specific risks in silos (for instance, discrimination, bias, or disinformation) without a) addressing the profit motives that drive the uncritical adoption of technologies in a winner-take-all data economy, or b) recognizing that AI-based risks are not experienced in isolation but are interconnected with structural issues.

9. Capture of the AI public/commons. Open source models can play a pivotal role in democratizing access to AI technologies. But within the current landscape, the resources and investments available for open source efforts are overwhelmingly controlled by Big Tech. Most prominent generative AI developers, including those branding their models as open, work in partnership with Big Tech companies (for instance, OpenAI with Microsoft, Anthropic with Google, and Stability AI with Amazon), depending on their funding, their compute power (cloud infrastructure/hardware), or their training data to achieve scale.20 Even when startups build on LLMs and develop open applications, they ultimately enrich the ecosystem of large private players.

These issues extend to public goods/national AI initiatives as well, creating a situation where innovation ecosystems are under siege, unable to evolve independently or to serve public AI applications. Big Tech dominance also gives rise to maximalist regulatory approaches that set burdens targeted at highly capable models but apply to all actors in the ecosystem, creating a lopsidedness that condemns smaller players to failure or stagnation and allows Big Tech to solidify its advantage.21

10. Absence of sustainability considerations. The biggest threat posed by current trajectories of AI development is an exacerbation of the environmental crisis. Emerging evidence suggests that AI may be more of a problem than a solution in our struggle against climate change, water shortages, and high energy consumption.22 Some estimates suggest that the water consumed in training OpenAI’s large language model GPT-3 was equivalent to the amount needed to fill a nuclear reactor cooling tower.23 Even startups and technology developers working towards a more ethical and transparent AI industry are struggling to address the sustainability challenge.24 Yet questions of ecological impact have been largely missing from the AI governance conversation, even as AI’s massive carbon footprint looms over the world.

II. Recommended directions for AI governance – Regulating for the AI we want

AI governance must be oriented towards human-centric innovation, epistemic justice and regenerative development. This means adopting a systemic approach that includes:

  • Dealing with the structural imbalances that shape a highly unequal AI paradigm, and reining in Big Tech that currently controls the playing field;
  • Adopting a feminist and intersectional approach to data ethics that is attentive to algorithmic discrimination and data minimalism to prevent the undue datafication of bodies and communities already subject to hyper surveillance by state and markets alike;
  • Safeguarding and ensuring data integrity in AI systems so a trustworthy, credible and fact-based information ecosystem can operate;
  • Striking a balance between preventing potential harms and fostering innovation and equity;
  • Shifting from risk reduction to advancing strong institutional frameworks for audit and enforcement;
  • Designing a multi-scalar governance model with justiciable rights, norm-building at the multilateral level, and room for contextual local implementation;
  • Programming sustainability considerations into AI development to tackle extractivism, hyper-consumptive models, and other downstream effects of AI;
  • Legitimizing a role for public authorities and democratic governance mechanisms.

Realizing AI’s transformative potential needs attention to both democratic and distributive integrity.25 Specifically, this would include the elements below in shaping AI governance.


1. A supra-liberal framework for AI governance. As AI raises new questions about the nature of personhood and the categories of rights holders, the adequacy and appropriateness of the current human rights approach is called into question. Prevalent rights frameworks will need serious reflection and updating to address the challenges of our current moment. Going beyond a universalist, liberal rights framework, a supra-liberal formulation that addresses historical and contextual injustices will point toward a wider, post-anthropocentric view that is respectful of collective rights and natural ecosystems.

This would involve reforming multilateral processes to usher in an international regime for AI that is cross-cutting, as well as a cross-sectoral effort to redefine rights regimes in areas such as food security, health, environment, welfare, and gender equality. Our techno-social future needs a global-to-local institutional revamp so that AI is guided to serve the goal of enhancing the capabilities and aspirations of all individuals and communities.

2. Public mechanisms and standards to operationalize ‘Responsible AI’. Truly responsible AI frameworks must be grounded in contextual accountability to concretely answer: Why this AI? What does it do? How is it imagined and for whose benefit? To this effect, the following aspects must inform AI governance efforts:

Transparency measures that meet a high threshold of explainability

  • periodic audits and assessments of AI models, published in public domain databases that disclose instances of serious failures and explain the steps taken to remedy the situation;
  • mandatory proactive disclosures requiring documentation of design and deployment considerations in AI, including details about parameters;
  • using post facto adequation as a standard in AI-assisted decision making by public bodies, which includes building systems with the capacity to flag relevant information to verify the machine’s inferences, as well as obligations on public authorities to record justifications while using these systems (a minimal illustration follows this list);26
  • public domain resources and archives communicated in plain language, including in non-mainstream languages and accessible formats;
  • meaningful access to critical data sets, APIs and source code for public interest action.
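Purely as an illustration of what the ‘post facto adequation’ measure above could require in practice, here is a minimal, hypothetical sketch of a decision record for AI-assisted decisions by public bodies. The language (Python), structure, and field names are our assumptions, not a prescribed standard.

```python
# Hypothetical decision record supporting post facto verification of
# AI-assisted decisions by a public body: the machine's inference, the
# evidence it flagged, and the official's recorded justification are
# all kept together so the inference can be checked after the fact.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    case_id: str
    model_version: str           # which model/version produced the inference
    inference: str               # the machine's recommendation
    flagged_evidence: list[str]  # information surfaced to verify the inference
    official_justification: str  # mandatory human rationale, recorded when
                                 # the decision is taken
    final_decision: str          # may follow or depart from the inference
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
```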


Accountability geared at harm prevention over redress

  • upward accountability tracing that goes beyond identification of malfunctioning code, with the duty of care pinned on the most powerful actor/s in the ecosystem;
  • global benchmarks and standards on due diligence and risk assessment, as well as acceptable margins of error for models;27
  • mandatory fundamental rights impact assessments that factor in rights violation prior to AI model building rather than post-facto;28
  • independent national bodies for democratic deliberation and civic oversight to bring in views of diverse groups of stakeholders to determine the necessity and appropriateness of AI deployment in various contexts;
  • critical literacy and popular science initiatives by educational institutions to encourage a culture of public engagement in AI.


Inclusion beyond an abstract idea of ‘fairness’

  • non-discrimination objectives/benchmarks in legal frameworks that guarantee harm prevention, outline remedies and promote equity and inclusion within AI systems;29
  • hard-coding representativity and inclusivity through techno-design measures that address implicit bias and outcome inequity – for instance, synthetic data (to correct gender- and race-based data gaps and bias in training data sets)30 and lower-bound constraints that account for intersectional bias;31
  • consultative mechanisms to better inform multilateral AI governance, so that perspectives of the Majority World, including from oral cultures and indigenous communities, are able to inform the values that underscore AI development and governance.32


3. A diverse and inclusive AI commons. To break the data and compute monopolies that concentrate AI resources in a handful of private corporations, and to create the enabling conditions to catalyze AI innovation, an AI commons approach grounded in collective rights is urgently needed. Interventions need to be multi-scalar and include the following measures:

  • A global center on AI innovation to address pressing development challenges. This can build on prototypes such as the European Organization for Nuclear Research (CERN) and the International Space Station.33
  • Public financing for AI. Public funding mechanisms through Official Development Assistance (ODA) commitments and international and regional financial institutions are vital for AI research in developing countries.
  • Reform of the IP regime to address challenges of data extractivism. A range of possibilities must be explored, including:
    • strong institutional safeguards to protect social sector data sets, especially where there is a risk of proprietization of core development functions through AI models (such as in health, education and welfare);
    • conditional access to public domain and open government data, with inclusion of purpose limitations and clear sunset clauses on use;
    • fair use limitations on how models learn from and use training data to specifically prevent profiteering through reuse, including through strict stipulations against free-riding and the development of substitutive value propositions;34
    • new collective licensing proposals that balance the moral rights of creators (of the inputs that feed AI systems) with values of intellectual commons as public heritage;35
    • reciprocity guarantees in common data pools, where private model developers who build on public data layers have an obligation to share back and enrich the commons.36


  • A culture that promotes participation. Incentives and infrastructures must be created for communities to actively contribute to the creation of datasets, algorithmic schema, and the formulation of use-cases at national and sub-national levels. Policies must also promote the development of GenAI models in non-mainstream languages and cultures, from large-scale public initiatives to smaller community-driven ones.

Submission to the Call for Papers on Global AI Governance by the UN Tech Envoy’s office for the first meeting of the Multistakeholder Advisory Body on AI. IT for Change (2023). Authored by Anita Gurumurthy (anita@itforchange.net) and Deepti Bharthur (deepti@itforchange.net).

Notes

1. OpenAI’s more advanced GPT-4, Meta’s LLaMA 2, Elon Musk’s xAI, Google’s Bard, and JD’s ChatRhino, to name a few.

2. Over the last year, we have seen a number of initiatives from policy makers.

The UK government announced plans to invest GBP 900 million in a cutting-edge supercomputer, as part of its AI strategy towards a BritGPT.

The EU has made substantial amendments to the draft Artificial Intelligence Act to specifically account for generative AI concerns in its latest iteration.

In 2022, the European Commission also published a proposal for a directive on AI liability that would create a ‘presumption of causality’, to ease the burden of proof for victims to establish damage caused by an AI system and give national courts the power to order disclosure of evidence about high-risk AI systems suspected of having caused damage.

Canada has added the Artificial Intelligence and Data Act (AIDA) to its Digital Charter Implementation Act, a bill originally aimed at updating its data protection laws.

China has introduced a set of Interim Measures for the Management of Generative AI.

The US has opted for voluntary compliance with the White House’s AI Commitments, to which many Big Tech companies are currently signatories.

3. This paper builds on some key threads from a roundtable on ‘Reframing AI governance through a political economy lens’, convened in June 2023 by IT for Change and Transnational Institute. The hybrid event brought together scholars, activists and practitioners to examine the building blocks of a transformative approach to AI governance. The discussions, which are summarized in this paper and extended upon, engage with many of the questions outlined in the UN Tech envoy’s call for papers on Global AI Governance towards informing the preparation for the first meeting of the Multistakeholder Advisory Body on AI.

4. Kak, A. & West, S.M. (2023). AI Now 2023 Landscape: Confronting Tech Power. AI Now Institute. https://ainowinstitute.org/2023-landscape.

5. Srnicek, N. (2022). Data, Compute, Labour. In M. Graham, & F. Ferrari (Eds.), Digital Work in the Planetary Market (pp. 241-261). The MIT Press;

Widder, D. G., West, S., & Whittaker, M. (2023). Open (for business): Big Tech, concentrated power, and the political economy of open AI. SSRN. http://dx.doi.org/10.2139/ssrn.4543807.

6. Hassan, Y. (2023, September 25). AI is Africa’s new growth mantra, but can it fix development? Bot Populi. https://botpopuli.net/ai-is-africas-new-development-mantra-but-can-it-fix-development/;

Miller, K. (2022, March 21). The movement to decolonize AI: Centering dignity over dependency. Stanford University Human Centered Artificial Intelligence. https://hai.stanford.edu/news/movement-decolonize-ai-centering-dignity-over-dependency

7. Yu, D., Rosenfeld, H., & Gupta, A. (2023). The ‘AI divide’ between the Global North and the Global South. World Economic Forum. https://www.weforum.org/agenda/2023/01/davos23-ai-divide-global-north-global-south/

8. Gurumurthy, A., & Chami, N. (2021). Building Back Better with E-Commerce: A Feminist Roadmap. IT for Change. https://itforchange.net/sites/default/files/1981/Building-Back-Better-with-e-Commerce-A-Feminist-Roadmap.pdf

9. McKendrick, J. (2022, October 12). Everyone wants responsible Artificial Intelligence, few have it yet. Forbes. https://www.forbes.com/sites/joemckendrick/2022/10/12/everyone-wants-responsible-artificial-intelligence-few-have-it-yet/?sh=331c91f1f10c

10. For example, platforms that target or deliver employment ads to particular people do not disclose how they spread their budget or weigh it against relevance, making it hard to know when job seekers are affected and how to prevent discrimination. Datta et al. (2014) found that setting users’ profile gender to ‘Female’ resulted in fewer ads related to high-paying jobs, but they could not determine what caused those findings due to limited visibility into the ad ecosystem. They note that Google’s policies to serve ads based on gender mean that one cannot be certain whether this outcome was intentional, even if it is discriminatory. See: Datta, A., Tschantz, M. C., & Datta, A. (2014). Automated experiments on ad privacy settings: A tale of opacity, choice, and discrimination. https://arxiv.org/pdf/1408.6491

11. Tsui, Q. (2023, September 26). Dethroning the all-powerful AI: Developing ethics for a demystified AI. Bot Populi. https://botpopuli.net/dethroning-the-all-powerful-ai-developing-ethics-for-a-demystified-ai/

12. Amrute, S., Singh, R., & Guzmán, L. R. (2022). A Primer on AI in/from the Majority World: An Empirical Site and a Standpoint. Data & Society. http://dx.doi.org/10.2139/ssrn.4199467

13. Hassan, Y. (2023). Governing algorithms from the South: a case study of AI development in Africa. AI & Society, 38(4), 1429-1442.

14. Larsen, B. C. (2022). The geopolitics of AI and the rise of digital sovereignty. Brookings. https://www.brookings.edu/articles/the-geopolitics-of-ai-and-the-rise-of-digital-sovereignty/

15. See Schmidt, E. (2022). AI, great power competition & national security. Daedalus, 151(2), 288–298. https://doi.org/10.1162/daed_a_01916. Also see the transcript of this keynote address by Eric Schmidt.

16. Maham, P., & Küspert, S. (2023). Governing General Purpose AI: A Comprehensive Map of Unreliability, Misuse and Systemic Risks. Stiftung Neue Verantwortung. https://www.stiftung-nv.de/de/publikation/governing-general-purpose-ai-comprehensive-map-unreliability-misuse-and-systemic-risks

17. This lopsidedness is illustrated in the case of experienced nurses having to disregard their professional expertise in favor of an ultimately erroneous algorithmic assessment of a patient’s diagnosis, as the penalties for not complying with the AI are prohibitive. See: Bannon, L. (2023, June 15). When AI overrules the nurses caring for you. Wall Street Journal. https://www.wsj.com/articles/ai-medical-diagnosis-nurses-f881b0fe

18. Henriques-Gomes, L. (2023, March 10). Robodebt: five years of lies, mistakes and failures that caused a $1.8bn scandal. The Guardian. https://www.theguardian.com/australia-news/2023/mar/11/robodebt-five-years-of-lies-mistakes-and-failures-that-caused-a-18bn-scandal

19. Hsu, T., & Thompson, S. A. (2023, February 8). Disinformation researchers raise alarms about A.I. chatbots. New York Times. https://www.nytimes.com/2023/02/08/technology/ai-chatbots-disinformation.html

20. See Widder, D. G., West, S., & Whittaker, M. (2023). Open (for business): Big Tech, concentrated power, and the political economy of open AI. SSRN. http://dx.doi.org/10.2139/ssrn.4543807. Also see: Solaiman, I. (2023). The gradient of generative AI release: Methods and considerations. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (pp. 111-122). https://dl.acm.org/doi/pdf/10.1145/3593013.3593981 for a detailed analysis of how the term ‘open’ is used in variations to encompass everything from the most minimalist of models to fully FOSS initiatives.

21. Vipra, J., & Korinek, A. (2023). Market Concentration Implications of Foundation Models: The Invisible Hand of ChatGPT. Center on Regulation and Markets Working Paper #9. Brookings. https://www.brookings.edu/series/center-on-regulation-and-markets-working-papers/

22. Norwegian Consumer Council. (2023). Ghost in the machine – Addressing the consumer harms of generative AI. Norwegian Consumer Council. https://storage02.forbrukerradet.no/media/2023/06/generative-ai-rapport-2023.pdf

23. Li, P., Yang, J., Islam, M. A., & Ren, S. (2023). Making AI less “thirsty”: Uncovering and addressing the secret water footprint of AI models. arXiv preprint. https://arxiv.org/pdf/2304.03271

24. The startup Hugging Face trained its LLM BLOOM on a French supercomputer powered by nuclear energy, producing a lower emissions footprint than most other models of similar size. But even before deployment, once training was completed, BLOOM had a carbon footprint equivalent to that of 60 flights between London and Paris. See: Heikkilä, M. (2022, November 14). We’re getting a better idea of AI’s true carbon footprint. MIT Technology Review. https://www.technologyreview.com/2022/11/14/1063192/were-getting-a-better-idea-of-ais-true-carbon-footprint/. Also see the climate spotlight in the 2023 Landscape Report on Confronting Tech Power from AI Now.

25. Initiate: Digital Rights in Society. (2022). Beyond the North-South Fork on the Road to AI Governance: An Action Plan for Democratic & Distributive Integrity. Initiate: Digital Rights in Society, Paris Peace Forum. https://parispeaceforum.org/en/initiate-ppf-global-south-ai-report-en-2/

26. Sinha, A. (2022). Navigating transparency in EU’s Artificial Intelligence Act: a policy proposal. Knowing without Seeing. https://www.knowingwithoutseeing.com/essays/ai-act-policy-proposal

27. These thresholds can be appropriately tiered to ensure that models with higher impact have greater requirements to fulfill.

28. Human Rights Watch. (2023, July 12). EU: Artificial Intelligence regulation should protect people’s rights. Human Rights Watch. https://www.hrw.org/news/2023/07/12/eu-artificial-intelligence-regulation-should-protect-peoples-rights

29. Lutz, F. (2023). Contribution on human rights, discrimination, and the regulation of AI with a special focus on gender equality for the Global Digital Compact. https://www.un.org/techenvoy/sites/www.un.org.techenvoy/files/GDC-submission_Fabian-Luetz.pdf

30. Bias and discriminatory decisions are largely driven by the datasets used for training and feeding the algorithm. Historical data gaps, biases, and stereotypes translate into unrepresentative and non-diverse datasets. See:

Leavy, S., Meaney, G., Wade, K., & Greene, D. (2020). Mitigating gender bias in machine learning data sets. In Bias and Social Aspects in Search and Recommendation: First International Workshop, BIAS 2020, Lisbon, Portugal, April 14, Proceedings 1 (pp. 12-26). https://arxiv.org/pdf/2005.06898;

Marwala, T., Fournier-Tombs, E., & Stinckwich, S. (2023). The Use of Synthetic Data to Train AI Models: Opportunities and Risks for Sustainable Development. UNU Centre, UNU-CPR, UNU Macau. https://arxiv.org/pdf/2309.00652.

See also this project by FINDHR, which is developing tools that reveal discrimination in job selection processes and create methods to avoid such discrimination: https://cordis.europa.eu/project/id/101070212
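As a toy companion to note 30, the sketch below illustrates one naive way synthetic records could rebalance a training set in which a group is underrepresented. The function name, fields, and jittering approach are our illustrative assumptions, not the method of the works cited above; real approaches (e.g., SMOTE or generative models) are far more sophisticated.

```python
# Illustrative only: naive synthetic oversampling of an underrepresented group.
import random

def synthetic_rebalance(rows, group_field, group_value, target_count, noise=0.01):
    """rows: list of dicts, each with a numeric 'features' list.
    Returns rows plus jittered synthetic copies of the given group,
    until that group reaches target_count members."""
    group = [r for r in rows if r[group_field] == group_value]
    synthetic = []
    while group and len(group) + len(synthetic) < target_count:
        base = random.choice(group)
        synthetic.append({
            **base,
            "features": [x + random.gauss(0, noise) for x in base["features"]],
            "synthetic": True,  # mark provenance so audits can trace synthetic data
        })
    return rows + synthetic
```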

31. The intersectional nature of marginalization and social groupings, and its impact on bias, has been overlooked in techno-design interventions that seek to counter implicit bias in AI models. Lower-bound constraints – i.e., mandatory minimums that the algorithm’s selections must satisfy – defined over intersectional data (e.g., race and gender, or gender and caste) could be a solution. See: Mehrotra, A., Pradelski, B. S., & Vishnoi, N. K. (2022). Selection in the presence of implicit bias: the advantage of intersectional constraints. Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 599-609). https://dl.acm.org/doi/pdf/10.1145/3531146.3533124
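To make the lower-bound idea in note 31 concrete, here is a minimal sketch, under our own simplifying assumptions: select the top-k candidates by estimated score, but guarantee a minimum number of selections from specified intersectional groups. The names, data shapes, and greedy strategy are hypothetical illustrations, not the cited paper’s algorithm.

```python
# Illustrative sketch: top-k selection subject to per-group lower bounds.
from collections import defaultdict

def constrained_select(candidates, k, lower_bounds):
    """candidates: list of (id, group, score) tuples with unique ids;
    lower_bounds: dict mapping group -> minimum number to select."""
    by_group = defaultdict(list)
    for cand in sorted(candidates, key=lambda c: c[2], reverse=True):
        by_group[cand[1]].append(cand)

    # First satisfy each group's lower bound with its top-scoring members.
    chosen = []
    for group, bound in lower_bounds.items():
        chosen.extend(by_group[group][:bound])

    # Then fill the remaining slots with the best-scoring candidates overall.
    chosen_ids = {c[0] for c in chosen}
    rest = sorted((c for c in candidates if c[0] not in chosen_ids),
                  key=lambda c: c[2], reverse=True)
    chosen.extend(rest[: max(0, k - len(chosen))])
    return chosen

# e.g., require at least two selections from the intersectional group
# ('female', 'dalit'), whatever the (possibly biased) score estimates say:
# shortlist = constrained_select(pool, k=10, lower_bounds={('female', 'dalit'): 2})
```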

32. NIC.Br. (2022). Artificial Intelligence and Culture: Perspectives for Cultural Diversity in the Digital Age. Brazilian Network Information Center, Brazilian Internet Steering Committee. https://cetic.br/media/docs/publicacoes/7/20221111151258/sectoral_studies-artificial_intelligence_and_culture.pdf

33. Slusallek, P. (2018, January 8). Artificial Intelligence and digital reality: Do we need a CERN for AI? OECD Forum. https://www.oecd-forum.org/posts/28452-artificial-intelligence-and-digital-reality-do-we-need-a-cern-for-ai

34. ‘Fair machine learning’ as a principle may be useful to consider here. In general, AI models benefit from having more data inputs and function better as a result. But the intent of AI systems that funnel data into their training models must be considered when applying this maxim. Safeguards must protect against the building of free-riding value propositions (e.g., scanning imagery of actors and using it in AI deepfake videos without compensating them, or a GenAI text model seeking to substitute the works of the very creators it has studied and learned from). See: Lemley, M. A., & Casey, B. (2020). Fair learning. Texas Law Review, 99, 743. https://texaslawreview.org/fair-learning/

35. The Authors’ Guild’s collective licensing proposal seems useful in this regard: “The Authors’ Guild proposes to create a collective license whereby a collective management organization (CMO) would license out rights on behalf of authors, negotiate fees with the AI companies, and then distribute the payment to authors who register with the CMO. These licenses could cover past uses of books, articles, and other works in AI systems, as well as future uses. The latter would not be licensed without a specific opt-in from the author or other rights holders”.

36. Marcus, J. S., Martens, B., & Carugati, C. (2022). The European Health Data Space. European Parliament Policy Department studies. https://afyonluoglu.org/PublicWebFiles/Reports/PDP/international/2022-12%20EU%20ITRE-The%20European%20Health%20Data%20Space.pdf
