AIhub.org
 

AI transparency in practice: a report


by Lucy Smith
22 March 2023




Image: Fritzchens Fritz / Better Images of AI / GPU shot etched 5 / Licensed under CC-BY 4.0

A report, co-authored by Ramak Molavi Vasse’i (Mozilla’s Insights Team) and Jesse McCrosky (Thoughtworks), investigates the issue of AI transparency. The pair dig into what AI transparency actually means, and aim to provide useful and actionable information for specific stakeholders. The report also details a survey of current approaches, assesses their limitations, and outlines how meaningful transparency might be achieved.

The authors have highlighted the following key findings from their report:

  • The focus of builders is primarily on system accuracy and debugging, rather than helping end users and impacted people understand algorithmic decisions.
  • AI transparency is rarely prioritized by the leadership of respondents’ organizations, partly due to a lack of regulatory pressure.
  • While there is active research around AI explainability (XAI) tools, there are fewer examples of effective deployment and use of such tools, and little confidence in their effectiveness.
  • Apart from information on data bias, there is little work on sharing information on system design, metrics, or wider impacts on individuals and society. Builders generally do not employ criteria established for social and environmental transparency, nor do they consider unintended consequences.
  • Providing appropriate explanations to various stakeholders poses a challenge for developers. There is a noticeable discrepancy between the information survey respondents currently provide and the information they themselves would find useful and would recommend providing.

Topics covered in the report include:

  • Meaningful AI transparency
  • Transparency stakeholders and their needs
  • Motivations and priorities of builders around AI transparency
  • Transparency tools and methods
  • Awareness of social and ecological impact
  • Transparency delivery – practices and recommendations
  • Ranking challenges for greater AI transparency

You can read the report in full here. A PDF version is here.




Lucy Smith is Senior Managing Editor for AIhub.