AIhub.org
 

Watch the talks from the ACM Conference on Fairness, Accountability, and Transparency

18 August 2022




The ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT) took place in Seoul, South Korea from 21-24 June 2022. The event brought together researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems.

All of the keynote talks, panel discussions, tutorials, and research talks are available to watch on YouTube, with a playlist for each session type.

Four distinguished paper awards were presented at the conference. You can watch the associated talks below:


The values encoded in machine learning research
Abeba Birhane, Pratyusha Kalluri, Dallas Card, William Agnew, Ravit Dotan and Michelle Bao


Fairness-aware model-agnostic positive and unlabeled learning
Ziwei Wu and Jingrui He


Algorithmic tools in public employment services: towards a jobseeker-centric perspective
Kristen Scott, Sonja Mei Wang, Milagros Miceli, Pieter Delobelle, Karolina Sztandar-Sztanderska and Bettina Berendt


Towards intersectional feminist and participatory ML: a case study in supporting feminicide counterdata collection
H. Suresh, R. Movva, A. Lee Dogan, R. Bhargava, I. Cruxen, A. Martinez Cuba, G. Taurino, W. So, C. D’Ignazio





Lucy Smith is Senior Managing Editor for AIhub.




AIhub is supported by:




©2024 - Association for the Understanding of Artificial Intelligence