Interview with Teresa Salazar: Developing fair federated learning algorithms


by Lucy Smith
01 November 2022




In their paper FAIR-FATE: Fair Federated Learning with Momentum, Teresa Salazar, Miguel Fernandes, Helder Araujo, and Pedro Henriques Abreu develop a fairness-aware federated learning algorithm which aims to achieve group fairness while maintaining classification performance. Here, Teresa tells us more about their work.

What is the topic of the research in your paper?

With the widespread use of machine learning algorithms to make decisions which impact people’s lives, the area of fairness-aware machine learning has been receiving increasing attention. Fairness-aware machine learning algorithms ensure that predictions do not prejudice unprivileged groups of the population with respect to sensitive attributes such as race or gender. However, the focus has been on centralized machine learning, with decentralized methods receiving little attention.
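To make "group fairness" concrete: one widely used metric is statistical parity difference, which compares the rates of positive predictions received by the privileged and unprivileged groups. The short sketch below is illustrative only; the function name and the 0/1 encoding of the sensitive attribute are assumptions for this example, not taken from the paper.

import numpy as np

def statistical_parity_difference(y_pred, sensitive):
    """Gap between the positive-prediction rates of the privileged
    (sensitive == 1) and unprivileged (sensitive == 0) groups.
    Zero means both groups receive positive predictions equally often."""
    rate_priv = y_pred[sensitive == 1].mean()
    rate_unpriv = y_pred[sensitive == 0].mean()
    return rate_priv - rate_unpriv

# Example: positives for 60% of the privileged group but only 40%
# of the unprivileged group give a gap of 0.2.
y_pred = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0])
sensitive = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
print(statistical_parity_difference(y_pred, sensitive))  # 0.2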

Federated learning is an emerging decentralized technology that creates machine learning models using data distributed across multiple clients. In federated learning, the training process is divided into multiple clients which train individual local models on local datasets, without the need to share private data. At each communication round, each client shares its local model updates with the server that uses them to create a global shared model.
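As a reference point, the standard federated averaging (FedAvg) aggregation step on the server looks roughly like the minimal sketch below, which assumes flat weight vectors; the helper name is an illustrative choice, not code from the paper.

import numpy as np

def fedavg_step(client_ws, client_sizes):
    """One FedAvg communication round on the server: average the
    clients' model weights, weighted by local dataset size.
    Each entry of client_ws is a flat numpy vector of weights."""
    total = float(sum(client_sizes))
    return sum((n / total) * w for w, n in zip(client_ws, client_sizes))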

In this work we developed FAIR-FATE: a novel fairness-aware federated learning algorithm which aims to achieve group fairness while maintaining classification performance. To the best of our knowledge, this is the first approach in machine learning that aims to achieve fairness using a fair Momentum estimate. In addition, FAIR-FATE is a flexible algorithm that does not rely on any local fairness strategy at the clients.

Could you tell us about the implications of your research and why it is an interesting area for study?

Developing fair federated learning algorithms is of utmost importance. Because of its decentralized nature, there are several reasons why federated learning algorithms may produce biased decisions.

Firstly, since in federated learning each client has its own private dataset, that dataset may not be representative of the global distribution and can thus introduce or exacerbate bias in the global model. Secondly, typical centralized fair machine learning solutions require centralized access to the sensitive attribute information and, consequently, cannot be directly applied to federated learning without violating privacy constraints. Finally, the aggregation algorithms used on the server may introduce bias by, for example, giving higher importance to the models of clients which hold more data, amplifying the effects of under-representation of unprivileged groups in the data.

Because of these reasons, finding federated learning algorithms which facilitate collaboration amongst clients to build fair machine learning models while preserving their data privacy is a great challenge.

Outline of the FAIR-FATE algorithm.

Could you explain your methodology?

FAIR-FATE uses a new fair aggregation method that computes the global model by taking into account the fairness of the clients. In order to achieve this, the server has a validation set that is used at each communication round to measure the fairness of the clients and the current global model. Afterwards, the fair Momentum update is computed using a fraction of the previous fair update, as well as the average of the clients’ updates that have higher fairness than the current global model, giving higher importance to fairer models. Since Momentum-based solutions have been used to help overcome oscillations of noisy gradients, we hypothesise that the use of a fairness-aware Momentum term can overcome the oscillations of non-fair gradients.
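A minimal sketch of this server-side step, under simplifying assumptions of our own (flat weight vectors, a fairness_fn that scores a model on the server's validation set with higher meaning fairer, and a fixed momentum coefficient beta); this illustrates the idea described above, not the authors' exact implementation:

import numpy as np

def fair_fate_round(global_w, client_ws, momentum, beta, fairness_fn, val_set):
    """One server round of fairness-aware momentum aggregation."""
    f_global = fairness_fn(global_w, val_set)
    updates, weights = [], []
    for w in client_ws:
        f_client = fairness_fn(w, val_set)
        if f_client > f_global:
            # Keep only clients fairer than the current global model,
            # weighted by how much fairer they are.
            updates.append(w - global_w)
            weights.append(f_client - f_global)
    if updates:
        fair_update = np.average(updates, axis=0, weights=weights)
    else:
        # No client beats the global model: fall back to a plain average.
        fair_update = np.mean([w - global_w for w in client_ws], axis=0)
    # A fraction of the previous fair update plus the new fair average.
    momentum = beta * momentum + (1.0 - beta) * fair_update
    return global_w + momentum, momentum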

What were your main findings?

The proposed approach is evaluated on four real-world datasets which are commonly used in the literature for fairness-related studies: the COMPAS, the Adult, the Law School and the Dutch Census datasets. Several experiments are conducted on a range of non-identical data distributions at each client, considering the different sensitive attributes and target classes in each dataset.
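One common way to simulate such non-identical distributions is to give each client a different, Dirichlet-sampled mix of the sensitive groups. The sketch below is a generic illustration of this kind of split, not the paper's exact protocol:

import numpy as np

def non_iid_split(indices, sensitive, n_clients, alpha=0.5, seed=0):
    """Partition example indices across clients so that each client
    receives a different, Dirichlet-sampled share of every sensitive
    group. Smaller alpha means more heterogeneous clients."""
    rng = np.random.default_rng(seed)
    clients = [[] for _ in range(n_clients)]
    for g in np.unique(sensitive):
        group = indices[sensitive == g].copy()
        rng.shuffle(group)
        # How much of this group each client receives.
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(group)).astype(int)
        for cid, part in enumerate(np.split(group, cuts)):
            clients[cid].extend(part.tolist())
    return clients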

Experimental results on the described datasets demonstrate that FAIR-FATE significantly outperforms state-of-the-art fair federated learning algorithms on fairness without neglecting model performance under different data distributions.

Revisiting the question presented in this work: Can a fairness-aware Momentum term be used to overcome the oscillations of noisy non-fair gradients in federated learning?

We conclude that applying momentum techniques can help in achieving group fairness in federated learning.

What further work are you planning in this area?

Interesting directions for future work include: testing FAIR-FATE with multiple fairness metrics and multiple sensitive attributes simultaneously; extending the work to other learning tasks such as regression or clustering; and focusing on individual fairness notions.

We hope that this work inspires the community to develop more fairness-aware algorithms, in particular in federated learning, and to consider Momentum techniques for addressing bias in machine learning.

Read the research in full

FAIR-FATE: Fair Federated Learning with Momentum, Teresa Salazar, Miguel Fernandes, Helder Araujo, Pedro Henriques Abreu.

About Teresa


Teresa Salazar received a B.S. degree in Informatics Engineering from the University of Coimbra in 2018 and an M.S. degree in Informatics from the University of Edinburgh in 2019, with specialisation in machine learning and natural language processing.

Since 2020, she has been attending the Doctoral Program in Information Science and Technology at the University of Coimbra, where she is a member of the Centre for Informatics and Systems of the University of Coimbra. Her research interests focus on fairness and imbalanced data, natural language processing, and information retrieval.




Lucy Smith is Senior Managing Editor for AIhub.