Congratulations to the winners of the FAccT2022 distinguished paper awards!

29 June 2022

The ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT) took place in Seoul, South Korea from 21-24 June 2022. The event brought together researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems.

At the conference, the FAccT 2022 distinguished paper awards were announced. There were two distinguished paper awards, and two student distinguished paper awards.

Distinguished paper awards

The values encoded in machine learning research
Abeba Birhane, Pratyusha Kalluri, Dallas Card, William Agnew, Ravit Dotan and Michelle Bao

Machine learning (ML) currently exerts an outsized influence on the world, increasingly affecting communities and institutional practices. It is therefore critical that we question vague conceptions of the field as value-neutral or universally beneficial, and investigate what specific values the field is advancing. In this paper, we present a rigorous examination of the values of the field by quantitatively and qualitatively analyzing 100 highly cited ML papers published at premier ML conferences, ICML and NeurIPS. We annotate key features of papers which reveal their values: how they justify their choice of project, which aspects they uplift, their consideration of potential negative consequences, and their institutional affiliations and funding sources. We find that societal needs are typically very loosely connected to the choice of project, if mentioned at all, and that consideration of negative consequences is extremely rare. We identify 67 values that are uplifted in machine learning research, and, of these, we find that papers most frequently justify and assess themselves based on performance, generalization, efficiency, researcher understanding, novelty, and building on previous work. We present extensive textual evidence and analysis of how these values are operationalized. Notably, we find that each of these top values is currently being defined and applied with assumptions and implications generally supporting the centralization of power. Finally, we find increasingly close ties between these highly cited papers and tech companies and elite universities.

Read the paper in full here.

Fairness-aware model-agnostic positive and unlabeled learning
Ziwei Wu and Jingrui He

With the increasing application of machine learning to high-stakes decision-making problems, potential algorithmic bias towards people from certain social groups has negative impacts on individuals and on society at large. In real-world scenarios, many such problems involve positive and unlabeled data, for example medical diagnosis, criminal risk assessment and recommender systems. For instance, in medical diagnosis, only the diagnosed diseases will be recorded (positive) while others will not (unlabeled). Despite the large amount of existing work on fairness-aware machine learning in the (semi-)supervised and unsupervised settings, the fairness issue is largely under-explored in the aforementioned Positive and Unlabeled Learning (PUL) context, where it is usually more severe. In this paper, to alleviate this tension, we propose a fairness-aware PUL method named FAIRPUL. In particular, for binary classification over individuals from two populations, we aim to achieve similar true positive rates and false positive rates in both populations as our fairness metric. Based on the analysis of the optimal fair classifier for PUL, we design a model-agnostic post-processing framework, leveraging both the positive examples and the unlabeled ones. Our framework is proven to be statistically consistent in terms of both the classification error and the fairness metric. Experiments on synthetic and real-world data sets demonstrate that our framework outperforms state-of-the-art methods in both PUL and fair classification.
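The fairness metric the abstract describes (similar true positive rates and false positive rates across two populations) can be computed directly from labels, predictions, and group membership. The sketch below is an illustrative helper for evaluating that metric, not the FAIRPUL method itself; the function name and toy data are assumptions for demonstration.

```python
import numpy as np

def group_rates(y_true, y_pred, group):
    """Per-group true positive rate and false positive rate.

    Illustrative only: computes the quantities the fairness metric
    compares across populations (not the FAIRPUL algorithm).
    """
    rates = {}
    for g in np.unique(group):
        mask = group == g
        yt, yp = y_true[mask], y_pred[mask]
        # TPR: fraction of actual positives predicted positive
        tpr = (yp[yt == 1] == 1).mean() if (yt == 1).any() else float("nan")
        # FPR: fraction of actual negatives predicted positive
        fpr = (yp[yt == 0] == 1).mean() if (yt == 0).any() else float("nan")
        rates[g] = (tpr, fpr)
    return rates

# Toy example: two groups, binary labels and predictions
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(group_rates(y_true, y_pred, group))
```

A fairness-aware post-processor in this setting would aim to make the per-group (TPR, FPR) pairs approximately equal; in the toy data above, group 1 has a higher TPR than group 0, so the groups are treated unequally by this metric.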

Read the paper in full here.

Distinguished student paper awards

Algorithmic tools in public employment services: towards a jobseeker-centric perspective
Kristen Scott, Sonja Mei Wang, Milagros Miceli, Pieter Delobelle, Karolina Sztandar-Sztanderska and Bettina Berendt

Algorithmic and data-driven systems have been introduced to assist Public Employment Services (PES) in government agencies throughout the world. Their deployment has sparked public controversy and some of these systems have been removed from use or seen their roles significantly reduced as a consequence. Yet the implementation of such systems continues. In this paper, we use a participatory approach to determine a course forward for research and development in this area. Our investigation comprises two workshops: one fact-finding workshop with academics, system developers, the public sector, and civil-society organizations, the second a co-design workshop held with 13 unemployed migrants to Germany. Based on the discussion in the fact-finding workshop we identified challenges of existing PES (algorithmic) systems. From the co-design workshop we identified jobseekers’ desiderata when contacting PES, namely the need for human contact, the expectation to receive genuine orientation, and the desire to be seen as a whole human being. We map these expectations to three design considerations for algorithmic systems for PES, i.e., the importance of interpersonal interaction, jobseeker assessment as direction, and the challenge of mitigating misrepresentation. Finally, we argue that the limitations and risks of current systems cannot be addressed through minor adjustments but rather require a more fundamental change to the role of PES.

Read the paper in full here.

Towards intersectional feminist and participatory ML: a case study in supporting feminicide counterdata collection
H. Suresh, R. Movva, A. Lee Dogan, R. Bhargava, I. Cruxen, A. Martinez Cuba, G. Taurino, W. So, C. D’Ignazio

Data ethics and fairness have emerged as important areas of research in recent years. However, much of the work in this area focuses on retroactively auditing and “mitigating bias” in existing, potentially flawed systems, without interrogating the deeper structural inequalities underlying them. There are not yet examples of how to apply feminist and participatory methodologies from the start, to conceptualize and design machine learning-based tools that center and aim to challenge power inequalities. Our work targets this more prospective goal. Guided by the framework of Data Feminism, we co-design datasets and machine learning models to support the efforts of activists who collect and monitor data about feminicide – gender-based killings of women and girls. We describe how intersectional feminist goals and participatory processes shaped each stage of our approach, from problem conceptualization to data collection to model evaluation. We highlight several methodological contributions, including 1) an iterative data collection and annotation process that targets model weaknesses and interrogates framing concepts (such as who is included/excluded in “feminicide”), 2) models that explicitly focus on intersectional identities, rather than statistical majorities, and 3) a multi-step evaluation process—with quantitative, qualitative and participatory steps—focused on context-specific relevance. We also distill more general insights and tensions that arise from bridging intersectional feminist goals with ML. These include reflections on how ML may challenge power, embrace pluralism, rethink binaries and consider context, as well as the inherent limitations of any technology-based solution to address durable structural inequalities.

Read the paper in full here.

AIhub is dedicated to free high-quality information about AI.


©2021 - Association for the Understanding of Artificial Intelligence

