AIhub.org
 

AI transparency in practice: a report

by Lucy Smith
22 March 2023




Abstract microscopic photography of a Graphics Processing Unit resembling a satellite image of a big city. Fritzchens Fritz / Better Images of AI / GPU shot etched 5 / Licensed under CC-BY 4.0

A report, co-authored by Ramak Molavi Vasse’i (Mozilla’s Insights Team) and Jesse McCrosky (Thoughtworks), investigates the issue of AI transparency. The pair dig into what AI transparency actually means, and aim to provide useful and actionable information for specific stakeholders. The report also details a survey of current approaches, assesses their limitations, and outlines how meaningful transparency might be achieved.

The authors have highlighted the following key findings from their report:

  • The focus of builders is primarily on system accuracy and debugging, rather than helping end users and impacted people understand algorithmic decisions.
  • AI transparency is rarely prioritized by the leadership of respondents’ organizations, partly due to a lack of pressure to comply with legislation.
  • While there is active research around AI explainability (XAI) tools, there are fewer examples of effective deployment and use of such tools, and little confidence in their effectiveness.
  • Apart from information on data bias, there is little work on sharing information on system design, metrics, or wider impacts on individuals and society. Builders generally do not employ criteria established for social and environmental transparency, nor do they consider unintended consequences.
  • Providing appropriate explanations to various stakeholders poses a challenge for developers. There is a noticeable discrepancy between the information survey respondents currently provide and the information they would find useful and recommend.

Topics covered in the report include:

  • Meaningful AI transparency
  • Transparency stakeholders and their needs
  • Motivations and priorities of builders around AI transparency
  • Transparency tools and methods
  • Awareness of social and ecological impact
  • Transparency delivery – practices and recommendations
  • Ranking challenges for greater AI transparency

You can read the report in full here. A PDF version is here.




Lucy Smith, Managing Editor for AIhub.





