AIhub.org
 

Interview with Virginie Do – #AAAI2022 outstanding paper award winner


by
23 March 2022



Virginie Do

Virginie Do, Sam Corbett-Davies, Jamal Atif and Nicolas Usunier won the AAAI 2022 outstanding paper award for their work Online certification of preference-based fairness for personalized recommender systems. The award was presented at this year’s virtual AAAI Conference on Artificial Intelligence. Here, Virginie Do tells us more about the implications of this research, the methodology, and their main findings.

What is the topic of the research in your paper?

Our paper is about fairness in recommender systems, and more precisely about certifying that recommender systems treat their users fairly.

Could you tell us about the implications of your research and why it is an interesting area for study?

We conducted this research in the context of increased interest in auditing recommender systems for fairness. For example, some recent studies observed different delivery rates of ads depending on gender for similar jobs [Imana et al., 2021]. To strengthen the conclusions of such audits, it is important to check whether differences in recommendations imply a less favorable treatment of some users compared to others, or whether they simply reflect differences in preferences across users. In our work, we propose a fairness criterion that fills this gap by putting user preferences at its core. This criterion, derived from the economic literature on fair resource allocation [Foley, 1967], is called envy-freeness and states that “every user should prefer their recommendations to those of other users”. In other words, this criterion aims to prevent the unfairness of denying a user a better recommendation policy when such a policy is given to others.
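To make the criterion concrete, here is a toy sketch (the utility numbers and the function name `envious_pairs` are our own illustration, not from the paper): if `U[m][n]` denotes the utility user m would derive from user n's recommendation policy, the system is envy-free when every user weakly prefers their own policy.

```python
# Illustrative check of the envy-freeness criterion on a toy utility
# matrix (all numbers are made up for the example). U[m][n] is the
# utility user m would derive from the recommendation policy of user n;
# the system is envy-free if every user weakly prefers their own
# policy, i.e. U[m][m] >= U[m][n] for all n.

def envious_pairs(U, eps=0.0):
    """Return the (m, n) pairs where user m envies user n by more than eps."""
    pairs = []
    for m, row in enumerate(U):
        for n, u_mn in enumerate(row):
            if u_mn > row[m] + eps:
                pairs.append((m, n))
    return pairs

# Toy example: user 0 would get higher utility from user 1's policy.
U = [
    [0.5, 0.8, 0.3],   # user 0's utilities for policies 0, 1, 2
    [0.4, 0.9, 0.2],   # user 1 prefers their own policy
    [0.1, 0.3, 0.7],   # user 2 prefers their own policy
]
print(envious_pairs(U))  # [(0, 1)] -> user 0 envies user 1
```

Of course, in a real system this utility matrix is not observable; estimating these comparisons from interactions is exactly what the auditing algorithm below is for.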

Auditing scenario for recommender systems: at each timestep, the auditor may replace a user’s recommendations with the recommendations that another user would have received in the same context.

Could you explain your methodology?

The challenge of auditing for our fairness criterion is that it requires us to answer the counterfactual question “would user A get higher utility from the recommendations of user B than from their own?” This kind of question can be reliably answered through active exploration: swapping a user’s current recommendation policy with another existing recommendation policy, and estimating the user’s preference from noisy feedback such as “likes”, “shares” or ratings. This kind of exploration is typically done with multi-armed bandit algorithms. We design a new bandit algorithm specifically for the task of certifying fairness, with the added constraint that the exploration process should not deteriorate user satisfaction below a performance baseline.
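The idea can be sketched as follows. This is a much-simplified illustration, not the paper's exact algorithm: the function name, the Hoeffding-style confidence bounds, and the crude safety rule are all our own stand-ins. To test whether user A envies user B, the auditor occasionally swaps in B's policy, compares noisy binary feedback (modelled here as Bernoulli rewards) against A's own policy, and only keeps exploring while estimated cumulative reward stays above a fraction of the baseline.

```python
# Simplified sketch of a conservative bandit audit for envy between two
# users (illustrative only; not the algorithm from the paper).
import math
import random

def audit_envy(own_policy, other_policy, horizon=5000,
               delta=0.05, alpha=0.1, seed=0):
    """Return 'envy', 'no envy', or 'inconclusive' after at most `horizon` rounds.

    own_policy / other_policy: callables taking a random generator and
    returning a 0/1 reward (the user's noisy feedback on a recommendation).
    alpha: maximum tolerated loss relative to the baseline (own) policy.
    """
    rng = random.Random(seed)
    policies = [own_policy, other_policy]
    counts = [0, 0]      # pulls of [own, other]
    sums = [0.0, 0.0]    # reward sums

    def bonus(k):  # Hoeffding-style confidence radius
        return math.sqrt(math.log(4 * horizon / delta) / (2 * max(k, 1)))

    for t in range(1, horizon + 1):
        means = [sums[i] / max(counts[i], 1) for i in range(2)]
        # Crude conservative constraint: only explore the other policy
        # while cumulative reward stays above (1 - alpha) x baseline.
        safe = sums[0] + sums[1] >= (1 - alpha) * means[0] * t - 1
        arm = 1 if (safe and counts[1] <= counts[0]) else 0
        sums[arm] += policies[arm](rng)
        counts[arm] += 1
        means = [sums[i] / max(counts[i], 1) for i in range(2)]
        if counts[0] and counts[1]:
            if means[1] - bonus(counts[1]) > means[0] + bonus(counts[0]):
                return "envy"      # other policy is certifiably better
            if means[0] - bonus(counts[0]) > means[1] + bonus(counts[1]):
                return "no envy"   # own policy is certifiably better
    return "inconclusive"

# Usage: user A's own policy earns a "like" 30% of the time, B's 70%.
print(audit_envy(lambda rng: int(rng.random() < 0.3),
                 lambda rng: int(rng.random() < 0.7)))  # envy
```

The paper's actual algorithm is more refined, but the sketch captures the two ingredients: confidence bounds that let the auditor certify a preference from noisy feedback, and a constraint that bounds how much the exploration can cost the audited user.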

What were your main findings?

There are two main contributions: the first one concerns the properties of our fairness criterion for recommender systems, and the second one is the new auditing algorithm we propose.

Envy-freeness satisfies several desirable properties:

  • It is in line with giving users their most preferred recommendations, whereas other fairness criteria for users require deviating from optimal recommendations.
  • Recommender systems are two-sided markets involving users on one side and content producers on the other side. There is recent interest in designing recommender systems that are also fair towards their content producers, who benefit from the exposure they get on the platform [Singh and Joachims, 2018]. We prove that fairness as envy-freeness for users is compatible with enforcing fairness constraints on the content producer side.

Our other contribution is a sample-efficient auditing algorithm, in the sense that it requires a reasonable number of interactions with the user in order to certify envy (or the absence thereof). At the same time, we provide the theoretical guarantee that over the course of the audit, the recommendation performance for the user does not fall too far below a baseline. In practice, our experiments show that when the audited system is unfair to some users, the exploration process of the audit actually improves user satisfaction, instead of deteriorating it.

What further work are you planning in this area?

The auditing method we propose relies on simple modelling assumptions. To improve it, we need to refine our model to account for the dynamics of the recommendation environment, such as complex changes in user behaviour over the course of the audit.

References

  • Do, V.; Corbett-Davies, S.; Atif, J.; and Usunier, N. 2022. Online certification of preference-based fairness for personalized recommender systems. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36.
  • Foley, D. K. 1967. Resource allocation and the public sector.
  • Imana, B.; Korolova, A.; and Heidemann, J. 2021. Auditing for Discrimination in Algorithms Delivering Job Ads. In Proceedings of the Web Conference 2021, 3767–3778.
  • Singh, A.; and Joachims, T. 2018. Fairness of exposure in rankings. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2219–2228. ACM.

Virginie Do

Virginie Do is a PhD candidate in Computer Science at Université Paris Dauphine–PSL in France, and a resident at Meta AI. Her research is on fairness in machine learning and social choice theory, with a specific focus on ranking and recommender systems, and online algorithms. She holds an MSc and BSc in Applied Mathematics from Ecole Polytechnique, France, and an MSc in Social Data Science from the University of Oxford, UK.






Lucy Smith is Senior Managing Editor for AIhub.

