AIhub.org
 

AI transparency in practice: a report

by Lucy Smith
22 March 2023




Abstract microscopic photography of a Graphics Processing Unit resembling a satellite image of a big city. Fritzchens Fritz / Better Images of AI / GPU shot etched 5 / Licensed by CC-BY 4.0

A report, co-authored by Ramak Molavi Vasse’i (Mozilla’s Insights Team) and Jesse McCrosky (Thoughtworks), investigates the issue of AI transparency. The pair dig into what AI transparency actually means, and aim to provide useful and actionable information for specific stakeholders. The report also details a survey of current approaches, assesses their limitations, and outlines how meaningful transparency might be achieved.

The authors have highlighted the following key findings from their report:

  • The focus of builders is primarily on system accuracy and debugging, rather than on helping end users and impacted people understand algorithmic decisions.
  • AI transparency is rarely prioritized by the leadership of respondents’ organizations, partly due to a lack of pressure to comply with legislation.
  • While there is active research around AI explainability (XAI) tools, there are fewer examples of effective deployment and use of such tools, and little confidence in their effectiveness.
  • Apart from information on data bias, there is little work on sharing information on system design, metrics, or wider impacts on individuals and society. Builders generally do not employ criteria established for social and environmental transparency, nor do they consider unintended consequences.
  • Providing appropriate explanations to various stakeholders poses a challenge for developers. There is a noticeable discrepancy between the information survey respondents currently provide and the information they would find useful and recommend.

Topics covered in the report include:

  • Meaningful AI transparency
  • Transparency stakeholders and their needs
  • Motivations and priorities of builders around AI transparency
  • Transparency tools and methods
  • Awareness of social and ecological impact
  • Transparency delivery – practices and recommendations
  • Ranking challenges for greater AI transparency

You can read the report in full here. A PDF version is here.




Lucy Smith, Managing Editor for AIhub.



