AIhub.org
 

Tutorial on fairness, accountability, transparency and ethics in computer vision


14 July 2020




The Conference on Computer Vision and Pattern Recognition (CVPR) was held virtually on 14-19 June. Alongside invited talks, posters and workshops, the programme included a number of tutorials on a range of topics. Timnit Gebru and Emily Denton organised one of these tutorials, covering fairness, accountability, transparency and ethics in computer vision.

As the organisers write in the introduction to their tutorial, computer vision is no longer a purely academic endeavour; computer vision systems are now deployed widely across society, in areas including law enforcement, border control, employment and healthcare.

Seminal works such as the Gender Shades project (read the paper here), together with organisations campaigning for equitable and accountable AI systems, such as the Algorithmic Justice League, have been instrumental in prompting some big tech companies to rethink their facial recognition offerings. Amazon, Microsoft and IBM have all announced that they will, for the time being, stop selling the technology to police forces.

This tutorial helps lay the foundations for community discussions about the ethical considerations of some of the current use cases of computer vision technology. The presentations also seek to highlight research which focusses on uncovering and mitigating issues of bias and historical discrimination.

The tutorial comprises three parts, to be watched in order.

Part 1: Computer vision in practice: who is benefiting and who is being harmed?

Speaker: Timnit Gebru

Part 2: Data ethics

Speakers: Timnit Gebru and Emily Denton

Part 3: Towards more socially responsible and ethics-informed research practices

Speaker: Emily Denton

Following the tutorial there was a panel discussion, moderated by Angjoo Kanazawa, which you can watch below.




AIhub is dedicated to free high-quality information about AI.