Tutorial on fairness, accountability, transparency and ethics in computer vision


14 July 2020



The Conference on Computer Vision and Pattern Recognition (CVPR) was held virtually on 14-19 June 2020. Alongside invited talks, posters and workshops, the programme included tutorials on a range of topics. One of these, organised by Timnit Gebru and Emily Denton, covered fairness, accountability, transparency and ethics in computer vision.

As the organisers write in the introduction to their tutorial, computer vision is no longer a purely academic endeavour: its systems are now widely deployed across society, with applications in law enforcement, border control, employment and healthcare.

Seminal works such as the Gender Shades project (read the paper here), along with organisations campaigning for equitable and accountable AI systems, such as the Algorithmic Justice League, have been instrumental in prompting a rethink at some big tech companies regarding facial recognition. Amazon, Microsoft and IBM have all announced that they will, for the time being, stop selling the technology to police forces.

This tutorial lays the foundations for community discussion of the ethical considerations raised by current uses of computer vision technology. The presentations also highlight research focussed on uncovering and mitigating bias and historical discrimination.

The tutorial comprises three parts, to be watched in order.

Part 1: Computer vision in practice: who is benefiting and who is being harmed?

Speaker: Timnit Gebru

Part 2: Data ethics

Speakers: Timnit Gebru and Emily Denton

Part 3: Towards more socially responsible and ethics-informed research practices

Speaker: Emily Denton

Following the tutorial, there was a panel discussion, moderated by Angjoo Kanazawa, which you can watch below.




AIhub is dedicated to free high-quality information about AI.