AIhub.org
 

Tutorial on fairness, accountability, transparency and ethics in computer vision

14 July 2020



The Computer Vision and Pattern Recognition conference (CVPR) was held virtually on 14-19 June. As well as invited talks, posters and workshops, there were a number of tutorials on a range of topics. Timnit Gebru and Emily Denton were the organisers of one of the tutorials, which covered fairness, accountability, transparency and ethics in computer vision.

As the organisers write in the introduction to their tutorial, computer vision is no longer a purely academic endeavour; computer vision systems have been utilised widely across society. Such systems have been applied to law enforcement, border control, employment and healthcare.

Seminal works, such as the Gender Shades project (read the paper here), and organisations campaigning for equitable and accountable AI systems, such as the Algorithmic Justice League, have been instrumental in prompting some big tech companies to rethink their facial recognition systems. Amazon, Microsoft and IBM have all announced that they will (for the time being) stop selling the technology to police forces.

This tutorial helps lay the foundations for community discussions about the ethical considerations of some of the current use cases of computer vision technology. The presentations also seek to highlight research which focusses on uncovering and mitigating issues of bias and historical discrimination.

The tutorial comprises three parts, to be watched in order.

Part 1: Computer vision in practice: who is benefiting and who is being harmed?

Speaker: Timnit Gebru

Part 2: Data ethics

Speakers: Timnit Gebru and Emily Denton

Part 3: Towards more socially responsible and ethics-informed research practices

Speaker: Emily Denton

Following the tutorial there was a panel discussion, moderated by Angjoo Kanazawa.




AIhub is dedicated to free high-quality information about AI.




            AIhub is supported by:












©2024 - Association for the Understanding of Artificial Intelligence


 











