AIhub.org
 

AI transparency in practice: a report


22 March 2023




Image: Fritzchens Fritz / Better Images of AI / GPU shot etched 5 / Licensed under CC-BY 4.0

A report, co-authored by Ramak Molavi Vasse’i (Mozilla’s Insights Team) and Jesse McCrosky (Thoughtworks), investigates the issue of AI transparency. The pair dig into what AI transparency actually means, and aim to provide useful and actionable information for specific stakeholders. The report also details a survey of current approaches, assesses their limitations, and outlines how meaningful transparency might be achieved.

The authors have highlighted the following key findings from their report:

  • The focus of builders is primarily on system accuracy and debugging, rather than helping end users and impacted people understand algorithmic decisions.
  • AI transparency is rarely prioritized by the leadership of respondents’ organizations, partly due to a lack of pressure to comply with legislation.
  • While there is active research around AI explainability (XAI) tools, there are fewer examples of effective deployment and use of such tools, and little confidence in their effectiveness.
  • Apart from information on data bias, there is little work on sharing information on system design, metrics, or wider impacts on individuals and society. Builders generally do not employ criteria established for social and environmental transparency, nor do they consider unintended consequences.
  • Providing appropriate explanations to various stakeholders poses a challenge for developers. There is a noticeable discrepancy between the information survey respondents currently provide and the information they would find useful and recommend.

Topics covered in the report include:

  • Meaningful AI transparency
  • Transparency stakeholders and their needs
  • Motivations and priorities of builders around AI transparency
  • Transparency tools and methods
  • Awareness of social and ecological impact
  • Transparency delivery – practices and recommendations
  • Ranking challenges for greater AI transparency

You can read the report in full, and download a PDF version, on the Mozilla Foundation website.




Lucy Smith is Senior Managing Editor for AIhub.




©2026.02 - Association for the Understanding of Artificial Intelligence