Enabling a rights-based AI?


The latest report to warn about the societal risks of artificial intelligence (AI) documents extensive research on human rights online.

Freedom on the Net 2023: The Repressive Power of Artificial Intelligence assesses internet freedom in 70 countries, accounting for almost 90 percent of global internet users.

The report’s headline findings are depressing but perhaps not surprising:

  1. Global internet freedom declined for the 13th consecutive year.
  2. Attacks on free expression grew more common around the world.
  3. Generative AI threatens to supercharge online disinformation campaigns.
  4. AI has allowed governments to enhance and refine their online censorship.
What provides constructive hope, however, is the report’s assessment of regulatory models (or lack thereof) and its suggestion of a road map, largely built on recent steps taken by the European Union (EU) and the United States.

The EU’s General Data Protection Regulation (GDPR) is a key foundation for the draft Artificial Intelligence Act, which “would tailor obligations based on the level of risk associated with particular technologies.”

The US effort, the “Blueprint for an AI Bill of Rights”, focuses on principles for AI design, use, and deployment, but to date it depends on voluntary commitments by companies.

The report notes that business decisions over the research period alone, notably by the new owner of X (formerly Twitter), cast doubt on the effectiveness of self-regulation and make the case for responsible oversight more acute.

The report’s data and recommendations highlight the continued need for governments, companies, and civil society to work together on a rights-based approach that first and foremost protects freedom of expression and access to information. Regulation should be grounded in human rights, transparency, and independent oversight.

The recommendations also address the need to defend “information integrity” but acknowledge that this requires a long-term solution that prioritizes independent media and educated communities:

A whole-of-society approach to fostering a diverse and reliable information space entails supporting independent online media and empowering ordinary people with the tools they need to identify false or misleading information. Civic education initiatives and digital literacy training can help people navigate complex media environments. Governments should also allocate funding to develop detection tools for AI-generated content, which will only become more important as these tools grow more sophisticated and more widely used. Finally, democracies should scale up efforts to support independent online media through financial assistance and innovative financing models, technical support, and professional development support.

Reading through the report’s data and recommendations, though, one wonders what would ever convince authoritarian governments (which seem to be increasing in number) and globally powerful tech companies to cede their power to control. And certainly, the global reality can be overwhelming to contemplate.

But the power of democratic advocates and activists can be seen at the local and national levels – and these are the real building blocks of change. As the report itself notes with several examples, “Digital activism and civil society advocacy drove real-world improvements for human rights during the coverage period.”

These are the positive cases that we need to work together to multiply.

Photo: Dmitry Demidovich
