AIhub.org
Interview with Lea Demelius: Researching differential privacy


by Lucy Smith
25 March 2025




In this interview series, we’re meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. The Doctoral Consortium provides an opportunity for a group of PhD students to discuss and explore their research interests and career objectives in an interdisciplinary workshop together with a panel of established researchers. In the latest interview, we hear from Lea Demelius, who is researching differential privacy.

Tell us a bit about your PhD – where are you studying, and what is the topic of your research?

I am studying at Graz University of Technology in Austria. My research focuses on differential privacy, which is widely regarded as the state of the art for protecting privacy in data analysis and machine learning. I investigate the trade-offs and synergies that arise between various requirements for trustworthy AI – in particular privacy and fairness – with the goal of advancing the adoption of responsible machine learning models and shedding light on the practical and societal implications of integrating differential privacy into real-world systems.
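The interview itself contains no code, but the privacy-utility trade-off Lea describes can be illustrated with the classic Laplace mechanism, one of the standard building blocks of differential privacy. The sketch below is not from Lea's work; the function name and parameters are illustrative, and it assumes NumPy is available. Noise is drawn with scale = sensitivity / epsilon, so a smaller epsilon (stronger privacy) produces noisier, less accurate answers.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    # Laplace mechanism: release true_value plus Laplace noise with
    # scale = sensitivity / epsilon. Smaller epsilon -> stronger privacy
    # guarantee -> larger noise -> lower utility.
    rng = rng if rng is not None else np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# A counting query changes by at most 1 when one person's data is
# added or removed, so its sensitivity is 1.
rng = np.random.default_rng(seed=0)
true_count = 1000
strict = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.1, rng=rng)
loose = laplace_mechanism(true_count, sensitivity=1.0, epsilon=10.0, rng=rng)
```

With epsilon = 0.1 the released count is typically off by around 10, while with epsilon = 10 it is typically off by only around 0.1 – a concrete instance of the privacy-utility tension that the disparate-impact research discussed below builds on.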

Could you give us an overview of the research you’ve carried out so far during your PhD?

At the beginning of my PhD, I conducted an extensive literature analysis on recent advances in differential privacy for deep learning. After a long and thorough review process, the results are now published in ACM Computing Surveys. I also worked on privacy and security in AI more generally, focusing on data protection during computations, which includes not only differential privacy but also other privacy-enhancing technologies such as homomorphic encryption, multi-party computation, and federated learning.

Over the course of my literature analysis, I came across an intriguing line of research showing that differential privacy has a disparate impact on model accuracy. While the trade-off between privacy and utility – such as overall accuracy – is a well-recognized challenge, these studies show that strengthening privacy protections in machine learning models with differential privacy affects certain sub-groups disproportionately, raising a significant fairness concern. This trade-off between privacy and fairness is far less understood, so I decided to address some open questions I identified. In particular, I examined how the choice of a) metrics and b) hyperparameters affects the trade-off.

Is there an aspect of your research that has been particularly interesting?

I love that my research challenges me to think in a highly technical and logical way – differential privacy, after all, is a rigorous mathematical framework – while also keeping the broader socio-technological context in mind, considering both the societal implications of technology and society’s expectations around privacy and fairness.

What are your plans for building on your research so far during the PhD – what aspects will you be investigating next?

I plan to continue exploring the trade-offs between (differential) privacy, utility and fairness. Additionally, I am interested in practical attacks on sensitive data and machine learning models, as they can both inform our decisions on how to balance these trade-offs and reveal new vulnerabilities that require novel solutions. In the long term, I might broaden my research to other aspects of trustworthy AI, such as explainability or robustness, where there are also interesting trade-offs and potential synergies to investigate.

What made you want to study AI?

At first, my motivation stemmed primarily from fascination with the novel technologies and opportunities AI brought along. I was eager to understand how machine learning models work and how they can be improved. But I quickly realized that, for my PhD, that would not be enough for me. I want to work on something that not only fascinates me but also benefits society. Given the widespread use of AI models today, I believe it is crucial to develop technical solutions that enhance the trustworthiness of AI, such as improving privacy and fairness, but also robustness and explainability. With my research, I aim to contribute to the adoption of machine learning systems that are more aligned with ethical principles and societal needs.

What advice would you give to someone thinking of doing a PhD in the field?

Your advisor and team are key. Make sure that you can work well together: keeping up in a field as fast-paced as AI is so much easier as a team. And keep in mind that every PhD journey is different – there are so many influencing factors, both in and out of your control – so avoid comparing your journey too much with that of others.

Could you tell us an interesting (non-AI related) fact about you?

I love singing, especially with others. I founded an a cappella ensemble when I moved to Graz and I am part of the Styrian youth choir. My family also loves to sing whenever we all get together.

About Lea

Lea Demelius is a PhD student at Graz University of Technology (Austria) and the Know Center Research GmbH. Her research revolves around responsible AI, with a focus on differential privacy and fairness. She is particularly interested in the trade-offs and synergies that arise between various requirements for trustworthy AI, as well as the practical and societal implications of integrating differential privacy into real-world systems.





Lucy Smith is Senior Managing Editor for AIhub.



