
Interview with Lea Demelius: Researching differential privacy


by Lucy Smith
25 March 2025




In this interview series, we’re meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. The Doctoral Consortium provides an opportunity for a group of PhD students to discuss and explore their research interests and career objectives in an interdisciplinary workshop together with a panel of established researchers. In the latest interview, we hear from Lea Demelius, who is researching differential privacy.

Tell us a bit about your PhD – where are you studying, and what is the topic of your research?

I am studying at the University of Technology Graz in Austria. My research focuses on differential privacy, which is widely regarded as the state of the art for protecting privacy in data analysis and machine learning. I investigate the trade-offs and synergies that arise between various requirements for trustworthy AI, in particular privacy and fairness. My goal is to advance the adoption of responsible machine learning models and to shed light on the practical and societal implications of integrating differential privacy into real-world systems.
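
For readers unfamiliar with the formal guarantee, differential privacy has a precise mathematical definition. A randomized mechanism $\mathcal{M}$ is $(\varepsilon, \delta)$-differentially private if, for every pair of datasets $D$ and $D'$ that differ in a single record, and for every set of possible outputs $S$,

$$\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta$$

Smaller values of $\varepsilon$ and $\delta$ give a stronger guarantee that the output reveals little about any individual record, typically at the cost of adding more noise to the computation.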

Could you give us an overview of the research you’ve carried out so far during your PhD?

At the beginning of my PhD, I conducted an extensive literature analysis of recent advances in differential privacy for deep learning. After a long and thorough review process, the results are now published in ACM Computing Surveys. I also worked on privacy and security in AI more generally, focusing on data protection during computations, which includes not only differential privacy but also other privacy-enhancing technologies such as homomorphic encryption, multi-party computation, and federated learning.

Over the course of my literature analysis, I came across an intriguing line of research showing that differential privacy has a disparate impact on model accuracy. While the trade-off between privacy and utility, such as overall accuracy, is a well-recognized challenge, these studies show that boosting privacy protections in machine learning models with differential privacy affects certain sub-groups disproportionately, raising a significant fairness concern. This trade-off between privacy and fairness is far less understood, so I decided to address some of the open questions I identified. In particular, I examined how the choice of a) metrics and b) hyperparameters affects the trade-off.
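
Some background on where this trade-off comes from: deep learning models are typically trained with differential privacy using DP-SGD, which clips each example's gradient to a fixed norm bound and adds calibrated Gaussian noise to the batch update. Both steps tend to distort the gradients of underrepresented sub-groups more, which is one common intuition for the disparate accuracy impact described above. Below is a minimal, illustrative sketch of a single DP-SGD step; the function name and toy inputs are hypothetical, and production libraries such as Opacus or TensorFlow Privacy additionally handle privacy accounting.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """Return a differentially private average gradient (illustrative sketch).

    per_example_grads: array of shape (batch_size, num_params).
    clip_norm: L2 bound C applied to each example's gradient.
    noise_multiplier: sigma; the noise standard deviation is sigma * C.
    """
    batch_size = per_example_grads.shape[0]
    # Clip each per-example gradient to L2 norm at most clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale
    # Sum the clipped gradients, add Gaussian noise calibrated to the
    # clipping bound, and average over the batch.
    noise = rng.normal(0.0, noise_multiplier * clip_norm,
                       size=per_example_grads.shape[1])
    return (clipped.sum(axis=0) + noise) / batch_size

# Toy usage: a batch of 4 examples with 3 parameters each.
rng = np.random.default_rng(0)
grads = rng.normal(size=(4, 3))
private_grad = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
```

The clipping bound and noise multiplier here are exactly the kind of hyperparameters whose influence on the privacy-fairness trade-off this research examines.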

Is there an aspect of your research that has been particularly interesting?

I love that my research challenges me to think in a highly technical and logical way (differential privacy, after all, is a rigorous mathematical framework) while also keeping the broader socio-technological context in mind, considering both the societal implications of technology and society's expectations around privacy and fairness.

What are your plans for building on your research so far during the PhD – what aspects will you be investigating next?

I plan to continue exploring the trade-offs between (differential) privacy, utility and fairness. Additionally, I am interested in practical attacks on sensitive data and machine learning models, as they can both inform our decisions on how to balance these trade-offs and reveal new vulnerabilities that require novel solutions. In the long term, I might broaden my research to other aspects of trustworthy AI, such as explainability or robustness, where there are also interesting trade-offs and potential synergies to investigate.

What made you want to study AI?

At first, my motivation was driven primarily by fascination with the novel technologies and opportunities that AI brought with it. I was eager to understand how machine learning models work and how they can be improved. But I quickly realized that, for my PhD, that would not be enough for me. I want to work on something that not only fascinates me but also benefits society. Given the widespread use of AI models today, I believe it is crucial to develop technical solutions that enhance the trustworthiness of AI, such as improving privacy and fairness, but also robustness and explainability. With my research, I aim to contribute to the adoption of machine learning systems that are more closely aligned with ethical principles and societal needs.

What advice would you give to someone thinking of doing a PhD in the field?

Your advisor and team are key. Make sure you can work well together: keeping up in a field as fast-paced as AI is much easier with the right people around you. And keep in mind that every PhD journey is different; there are so many influencing factors, both in and out of your control, so avoid comparing your journey too closely with those of others.

Could you tell us an interesting (non-AI related) fact about you?

I love singing, especially with others. I founded an a cappella ensemble when I moved to Graz and I am part of the Styrian youth choir. My family also loves to sing whenever we all get together.

About Lea

Lea Demelius is a PhD student at the University of Technology, Graz (Austria) and the Know Center Research GmbH. Her research revolves around responsible AI, with a focus on differential privacy and fairness. She is particularly interested in the trade-offs and synergies that arise between various requirements for trustworthy AI, as well as the practical and societal implications of integrating differential privacy into real-world systems.





Lucy Smith is Senior Managing Editor for AIhub.