Interview with Lea Demelius: Researching differential privacy


by Lucy Smith
25 March 2025




In this interview series, we’re meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. The Doctoral Consortium provides an opportunity for a group of PhD students to discuss and explore their research interests and career objectives in an interdisciplinary workshop together with a panel of established researchers. In the latest interview, we hear from Lea Demelius, who is researching differential privacy.

Tell us a bit about your PhD – where are you studying, and what is the topic of your research?

I am studying at Graz University of Technology in Austria. My research focuses on differential privacy, which is widely regarded as the state of the art for protecting privacy in data analysis and machine learning. I investigate the trade-offs and synergies that arise between various requirements for trustworthy AI – in particular privacy and fairness – with the goal of advancing the adoption of responsible machine learning models and shedding light on the practical and societal implications of integrating differential privacy into real-world systems.
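For readers new to the topic, differential privacy has a precise mathematical definition. A randomized algorithm $M$ is $\varepsilon$-differentially private if, for any two datasets $D$ and $D'$ that differ in a single record, and any set of outputs $S$,

$$\Pr[M(D) \in S] \le e^{\varepsilon} \cdot \Pr[M(D') \in S],$$

so no single individual's data can change the output distribution by more than a factor of $e^{\varepsilon}$ – the smaller $\varepsilon$, the stronger the privacy guarantee.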

Could you give us an overview of the research you’ve carried out so far during your PhD?

At the beginning of my PhD, I conducted an extensive literature analysis on recent advances in differential privacy for deep learning. After a long and thorough review process, the results are now published in ACM Computing Surveys. I also worked on privacy and security in AI more generally, focusing on data protection during computation, which includes not only differential privacy but also other privacy-enhancing technologies such as homomorphic encryption, multi-party computation, and federated learning.

Over the course of my literature analysis, I came across an intriguing line of research that showed that differential privacy has a disparate impact on model accuracy. While the trade-off between privacy and utility – such as overall accuracy – is a well-recognized challenge, these studies show that boosting privacy protections in machine learning models with differential privacy impacts certain sub-groups disproportionately, raising a significant fairness concern. This trade-off between privacy and fairness is far less understood, so I decided to address some open questions I identified. In particular, I examined how the choice of a) metrics and b) hyperparameters affect the trade-off.
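To make the hyperparameter question concrete, here is a minimal Python sketch (an editorial illustration, not code from this research) of the core update of DP-SGD, the standard algorithm for training deep learning models with differential privacy. The two privacy hyperparameters – the clipping norm and the noise multiplier – bound each example's influence on the update and calibrate the added noise; the function name and default values below are hypothetical.

import numpy as np

def dp_sgd_update(params, per_example_grads, clip_norm=1.0,
                  noise_multiplier=1.1, learning_rate=0.1):
    # One illustrative DP-SGD step: clip per-example gradients, average,
    # add Gaussian noise, then take a gradient descent step.
    batch_size = len(per_example_grads)
    clipped = []
    for grad in per_example_grads:
        norm = np.linalg.norm(grad)
        # Rescale each per-example gradient so its L2 norm is at most
        # clip_norm, bounding the influence of any single training example.
        clipped.append(grad * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Gaussian noise calibrated to the clipped sensitivity: a larger
    # noise_multiplier gives stronger privacy but lower accuracy.
    noise = np.random.normal(0.0, noise_multiplier * clip_norm / batch_size,
                             size=mean_grad.shape)
    return params - learning_rate * (mean_grad + noise)

Raising the clipping norm or lowering the noise multiplier typically recovers accuracy at the cost of privacy, and, as the studies mentioned above show, such choices can also shift how the accuracy loss is distributed across sub-groups.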

Is there an aspect of your research that has been particularly interesting?

I love that my research challenges me to think in a highly technical and logical way – differential privacy, after all, is a rigorous mathematical framework – while also keeping the broader socio-technological context in mind, considering both the societal implications of the technology and society’s expectations around privacy and fairness.

What are your plans for building on your research so far during the PhD – what aspects will you be investigating next?

I plan to continue exploring the trade-offs between (differential) privacy, utility and fairness. Additionally, I am interested in practical attacks on sensitive data and machine learning models, as they can both inform our decisions on how to balance these trade-offs and reveal new vulnerabilities that require novel solutions. In the long term, I might broaden my research to other aspects of trustworthy AI, such as explainability or robustness, where there are also interesting trade-offs and potential synergies to investigate.

What made you want to study AI?

At first, my motivation was primarily a fascination with the novel technologies and opportunities that AI brought along. I was eager to understand how machine learning models work and how they can be improved. But I quickly realized that, for my PhD, that would not be enough for me. I want to work on something that not only fascinates me, but also benefits society. Given the widespread use of AI models today, I believe it is crucial to develop technical solutions that enhance the trustworthiness of AI, such as improving privacy and fairness, but also robustness and explainability. With my research, I aim to contribute to the adoption of machine learning systems that are more aligned with ethical principles and societal needs.

What advice would you give to someone thinking of doing a PhD in the field?

Your advisor and team are key. Make sure that you can work well together: keeping up in a field as fast-paced as AI will be so much easier together. And keep in mind that every PhD journey is different – there are so many influencing factors, both in and out of your control – so avoid comparing your journey too much with those of others.

Could you tell us an interesting (non-AI related) fact about you?

I love singing, especially with others. I founded an a cappella ensemble when I moved to Graz and I am part of the Styrian youth choir. My family also loves to sing whenever we all get together.

About Lea

Lea Demelius is a PhD student at Graz University of Technology (Austria) and the Know Center Research GmbH. Her research revolves around responsible AI, with a focus on differential privacy and fairness. She is particularly interested in the trade-offs and synergies that arise between various requirements for trustworthy AI, as well as the practical and societal implications of integrating differential privacy into real-world systems.





Lucy Smith is Senior Managing Editor for AIhub.



